In my Android app, I am programmatically capturing a screenshot from a background service. I get it as a Bitmap.
Next, I obtain the coordinates of the region of interest (ROI) using the following Android framework API:
Rect ROI = new Rect();
viewNode.getBoundsInScreen(ROI);
Here, getBoundsInScreen() in Android is the equivalent of the JavaScript function getBoundingClientRect().
A Rect in Android has the following properties:
rect.top
rect.left
rect.right
rect.bottom
rect.height()
rect.width()
rect.centerX() /* rounded off to integer */
rect.centerY()
rect.exactCenterX() /* exact value in float */
rect.exactCenterY()
Related question: What does top, left, right and bottom mean in Android Rect object
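In other words, left/top/right/bottom are absolute edge coordinates, not offsets from a corner. The following plain-Java sketch models these semantics with a toy class (hypothetical, not the real android.graphics.Rect) so the arithmetic can be seen in isolation:

```java
// Toy model of android.graphics.Rect coordinate semantics
// (hypothetical class, NOT the real android.graphics.Rect).
public class RectSemantics {
    static class Rect {
        int left, top, right, bottom;
        Rect(int left, int top, int right, int bottom) {
            this.left = left; this.top = top;
            this.right = right; this.bottom = bottom;
        }
        int width()   { return right - left; }        // horizontal extent
        int height()  { return bottom - top; }        // vertical extent
        int centerX() { return (left + right) >> 1; } // integer midpoint
        int centerY() { return (top + bottom) >> 1; }
    }

    public static void main(String[] args) {
        // Top-left corner at (100, 200), bottom-right corner at (300, 250)
        Rect r = new Rect(100, 200, 300, 250);
        System.out.println(r.width());   // 200
        System.out.println(r.height());  // 50
        System.out.println(r.centerX()); // 200
        System.out.println(r.centerY()); // 225
    }
}
```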
while a Rect in OpenCV has the following properties:
rect.width
rect.height
rect.x /* x coordinate of the top-left corner */
rect.y /* y coordinate of the top-left corner */
Now, before performing any OpenCV-related operations, we need to convert the Android Rect into an OpenCV Rect.
Related question: Understanding how actually drawRect or drawing coordinates work in Android
There are two ways to convert an Android Rect to an OpenCV Rect (as suggested by Karl Phillip in his answer). Both produce the same values and yield the same result:
/* Compute the top-left corner using the center point of the rectangle. */
int x = androidRect.centerX() - (androidRect.width() / 2);
int y = androidRect.centerY() - (androidRect.height() / 2);
// OR simply use the already available member variables:
x = androidRect.left;
y = androidRect.top;
int w = androidRect.width();
int h = androidRect.height();
org.opencv.core.Rect roi = new org.opencv.core.Rect(x, y, w, h);
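To see that both paths agree, here is a small plain-Java check with concrete numbers. The Android calls are inlined as their integer definitions (width() = right - left, centerX() = (left + right) >> 1), so this is a model of the arithmetic only, not the real API:

```java
public class RectConversionCheck {
    // Method 1: top-left x derived from the center point,
    // using plain ints in place of androidRect.centerX() and androidRect.width().
    static int viaCenter(int left, int right) {
        int width = right - left;
        int centerX = (left + right) >> 1;
        return centerX - (width / 2);
    }

    public static void main(String[] args) {
        // For integer coordinates the result always equals `left`
        // (method 2), whether the width is even or odd:
        System.out.println(viaCenter(100, 300)); // 100 (even width)
        System.out.println(viaCenter(7, 20));    // 7   (odd width)
    }
}
```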
Now, one of the OpenCV operations I am performing is blurring the ROI within the screenshot:
Mat originalMat = new Mat();
// Work on a mutable ARGB_8888 copy so the pixel format is known
Bitmap configuredBitmap32 = originalBitmap.copy(Bitmap.Config.ARGB_8888, true);
Utils.bitmapToMat(configuredBitmap32, originalMat);
// Extract the ROI, blur it, and write it back into the full image
Mat ROIMat = originalMat.submat(roi).clone();
Imgproc.GaussianBlur(ROIMat, ROIMat, new org.opencv.core.Size(0, 0), 5, 5);
ROIMat.copyTo(originalMat.submat(roi));
Bitmap blurredBitmap = Bitmap.createBitmap(originalMat.cols(), originalMat.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(originalMat, blurredBitmap);
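One defensive step worth adding before the submat call: submat() throws if the ROI extends past the Mat, and getBoundsInScreen() can return rectangles that are partially off-screen. A clamp helper, sketched in plain Java (a hypothetical helper, not part of the original code):

```java
public class RoiClamp {
    // Clamp an (x, y, w, h) ROI to a cols x rows image; returns {x, y, w, h}.
    static int[] clamp(int x, int y, int w, int h, int cols, int rows) {
        int x0 = Math.max(0, x);
        int y0 = Math.max(0, y);
        int x1 = Math.min(cols, x + w);
        int y1 = Math.min(rows, y + h);
        // Width/height floor at 0 when the ROI lies fully outside the image
        return new int[] { x0, y0, Math.max(0, x1 - x0), Math.max(0, y1 - y0) };
    }

    public static void main(String[] args) {
        // ROI hanging 20px off the right edge of a 1080x1920 screenshot
        int[] r = clamp(1000, 500, 100, 50, 1080, 1920);
        System.out.println(r[0] + "," + r[1] + "," + r[2] + "," + r[3]); // 1000,500,80,50
    }
}
```

The clamped values would then feed new org.opencv.core.Rect(r[0], r[1], r[2], r[3]) before calling submat.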
This brings us very close to the desired result. Almost there, but not quite: the area just BENEATH the targeted region is blurred, rather than the region itself.
For example, if the targeted region of interest is a password field, the code above produces the following result. On the left is the Microsoft Live ROI, and on the right the Pinterest ROI:
As can be seen, the area below the ROI gets blurred instead.
So my question is, finally, why isn't the exact region of interest blurred?

- The coordinates obtained through the Android API getBoundsInScreen() appear to be correct.
- The conversion from an Android Rect to an OpenCV Rect also appears to be correct. Or is it?
- The code for blurring a region of interest also appears to be correct. Is there another way to do the same thing?
Note: I have provided the actual full-size screenshots. They have been scaled down by 50% to fit this post, but are otherwise exactly what I get on my Android device.
Best Answer
If I'm not mistaken, OpenCV's Rect assumes that x and y specify the top-left corner of the rectangle:
/* Compute the top-left corner using the center point of the rectangle
* TODO: take care of float to int conversion
*/
int x = androidRect.centerX() - (androidRect.width() / 2);
int y = androidRect.centerY() - (androidRect.height() / 2);
// OR simply use the already available member variables:
x = androidRect.left;
y = androidRect.top;
int w = androidRect.width();
int h = androidRect.height();
org.opencv.core.Rect roi = new org.opencv.core.Rect(x, y, w, h);
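One plausible cause for the blur landing below the target (an assumption on my part, not confirmed in the post): getBoundsInScreen() reports coordinates relative to the full screen, so if the captured bitmap does not include the status bar, every y coordinate is too large by the status-bar height, shifting the blurred patch downward. The fix would be to subtract that offset before building the OpenCV Rect, sketched here in plain Java with a hypothetical statusBarHeight value (on a real device it would be measured at runtime):

```java
public class OffsetAdjustment {
    // Shift a screen-space ROI into bitmap space by removing a vertical
    // offset (e.g. the status-bar height). Returns {x, y, w, h}.
    static int[] toBitmapRect(int x, int y, int w, int h, int statusBarHeight) {
        return new int[] { x, y - statusBarHeight, w, h };
    }

    public static void main(String[] args) {
        int statusBarHeight = 63; // example value only, NOT a real measurement
        int[] r = toBitmapRect(100, 500, 200, 50, statusBarHeight);
        System.out.println(r[1]); // 437
    }
}
```

If this is the cause, the adjusted y would replace androidRect.top when constructing org.opencv.core.Rect.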
A similar question about android - Blur a region of an image on Android (in Java) can be found on Stack Overflow: https://stackoverflow.com/questions/60333766/