java - MLKit object detection not detecting objects

Tags: java android machine-learning android-camerax google-mlkit

Google's ML Kit (without Firebase) is new, so I'm having trouble. I'm trying to follow this example: https://developers.google.com/ml-kit/vision/object-detection/custom-models/android

The app opens fine and the camera works (i.e., I can see things through it). But the actual detection doesn't seem to work.

Am I missing the part of the code that actually detects objects? Or is there a problem with my CameraX or ImageInput implementation?

package com.example.mlkitobjecttest;

import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;
import androidx.camera.core.Camera;
import androidx.camera.core.CameraSelector;
import androidx.camera.core.CameraX;
import androidx.camera.core.ImageAnalysis;
import androidx.camera.core.ImageProxy;
import androidx.camera.core.Preview;
import androidx.camera.core.impl.PreviewConfig;
import androidx.camera.lifecycle.ProcessCameraProvider;
import androidx.camera.view.PreviewView;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;
import androidx.lifecycle.LifecycleOwner;

import android.content.pm.PackageManager;
import android.graphics.Rect;
import android.media.Image;
import android.os.Bundle;
import android.text.Layout;
import android.util.Rational;
import android.util.Size;
import android.view.View;
import android.widget.TextView;
import android.widget.Toast;

import com.google.android.gms.tasks.OnFailureListener;
import com.google.android.gms.tasks.OnSuccessListener;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.mlkit.common.model.LocalModel;
import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.objects.DetectedObject;
import com.google.mlkit.vision.objects.ObjectDetection;
import com.google.mlkit.vision.objects.ObjectDetector;
import com.google.mlkit.vision.objects.custom.CustomObjectDetectorOptions;

import org.w3c.dom.Text;

import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MainActivity extends AppCompatActivity {

    private class YourAnalyzer implements ImageAnalysis.Analyzer {

        @Override
        @androidx.camera.core.ExperimentalGetImage
        public void analyze(ImageProxy imageProxy) {

            Image mediaImage = imageProxy.getImage();
            if (mediaImage != null) {
                InputImage image =
                        InputImage.fromMediaImage(mediaImage, imageProxy.getImageInfo().getRotationDegrees());
                // Pass image to an ML Kit Vision API
                // ...
                LocalModel localModel =
                        new LocalModel.Builder()
                                .setAssetFilePath("mobilenet_v1_1.0_128_quantized_1_default_1.tflite")
                                // or .setAbsoluteFilePath(absolute file path to tflite model)
                                .build();

                CustomObjectDetectorOptions customObjectDetectorOptions =
                        new CustomObjectDetectorOptions.Builder(localModel)
                                .setDetectorMode(CustomObjectDetectorOptions.SINGLE_IMAGE_MODE)
                                .enableMultipleObjects()
                                .enableClassification()
                                .setClassificationConfidenceThreshold(0.5f)
                                .setMaxPerObjectLabelCount(3)
                                .build();

                ObjectDetector objectDetector =
                        ObjectDetection.getClient(customObjectDetectorOptions);

                objectDetector
                        .process(image)
                        .addOnFailureListener(new OnFailureListener() {
                            @Override
                            public void onFailure(@NonNull Exception e) {
                                //Toast.makeText(getApplicationContext(), "Fail. Sad!", Toast.LENGTH_SHORT).show();
                                //textView.setText("Fail. Sad!");
                                imageProxy.close();
                            }
                        })
                        .addOnSuccessListener(new OnSuccessListener<List<DetectedObject>>() {
                            @Override
                            public void onSuccess(List<DetectedObject> results) {

                                for (DetectedObject detectedObject : results) {
                                    Rect box = detectedObject.getBoundingBox();


                                    for (DetectedObject.Label label : detectedObject.getLabels()) {
                                        String text = label.getText();
                                        int index = label.getIndex();
                                        float confidence = label.getConfidence();
                                        textView.setText(text);
                                    }
                                }
                                imageProxy.close();
                            }
                        });

            }
            //ImageAnalysis.Builder.fromConfig(new ImageAnalysisConfig).setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST);

        }

    }


    PreviewView prevView;
    private ListenableFuture<ProcessCameraProvider> cameraProviderFuture;
    private ExecutorService executor = Executors.newSingleThreadExecutor();
    TextView textView;

    private int REQUEST_CODE_PERMISSIONS = 101;
    private String[] REQUIRED_PERMISSIONS = new String[]{"android.permission.CAMERA"};
   /* @NonNull
    @Override
    public CameraXConfig getCameraXConfig() {
        return CameraXConfig.Builder.fromConfig(Camera2Config.defaultConfig())
                .setCameraExecutor(ContextCompat.getMainExecutor(this))
                .build();
    }
*/
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        prevView = findViewById(R.id.viewFinder);
        textView = findViewById(R.id.scan_button);

        if(allPermissionsGranted()){
            startCamera();
        }else{
            ActivityCompat.requestPermissions(this, REQUIRED_PERMISSIONS, REQUEST_CODE_PERMISSIONS);
        }

    }

    private void startCamera() {
        cameraProviderFuture = ProcessCameraProvider.getInstance(this);
        cameraProviderFuture.addListener(new Runnable() {
            @Override
            public void run() {
                try {
                    ProcessCameraProvider cameraProvider = cameraProviderFuture.get();
                    bindPreview(cameraProvider);
                } catch (ExecutionException | InterruptedException e) {
                    // No errors need to be handled for this Future.
                    // This should never be reached.
                }
            }
        }, ContextCompat.getMainExecutor(this));


    }

    void bindPreview(@NonNull ProcessCameraProvider cameraProvider) {

        Preview preview = new Preview.Builder()
                .build();

        CameraSelector cameraSelector = new CameraSelector.Builder()
                .requireLensFacing(CameraSelector.LENS_FACING_BACK)
                .build();

        preview.setSurfaceProvider(prevView.createSurfaceProvider());

        ImageAnalysis imageAnalysis =
                new ImageAnalysis.Builder()
                        .setTargetResolution(new Size(1280, 720))
                        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                        .build();
        imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(this), new YourAnalyzer());

        Camera camera = cameraProvider.bindToLifecycle((LifecycleOwner)this, cameraSelector, preview, imageAnalysis);


    }



    private boolean allPermissionsGranted() {
        for(String permission: REQUIRED_PERMISSIONS){
            if(ContextCompat.checkSelfPermission(this, permission) != PackageManager.PERMISSION_GRANTED){
                return false;
            }
        }
        return true;
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {

        if(requestCode == REQUEST_CODE_PERMISSIONS){
            if(allPermissionsGranted()){
                startCamera();
            } else{
                Toast.makeText(this, "Permissions not granted by the user.", Toast.LENGTH_SHORT).show();
                this.finish();
            }
        }
    }

}

Best Answer

Nothing is detected because you defined the wrong path to the tflite model file. Your emulator or physical device cannot resolve the given path, since it does not exist on the mobile device: C:\\Users\\dude\\Documents\\mlkitobjecttest\\app\\src\\main\\assets\\mobilenet_v1_1.0_128_quantized_1_default_1.tflite

Copy the model mobilenet_v1_1.0_128_quantized_1_default_1.tflite into the assets directory under your app project's src/main directory.

If you don't have that directory, just create a new one named assets.
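A side note, not part of the original answer: by default AAPT compresses files placed in assets, and a compressed model cannot be memory-mapped at runtime. If the model still fails to load from assets, a commonly suggested fix is to disable compression for .tflite files in the module's build.gradle (a sketch, assuming the Groovy DSL):

```groovy
android {
    // ...
    aaptOptions {
        // Keep .tflite model files uncompressed in the APK
        // so they can be memory-mapped at runtime
        noCompress "tflite"
    }
}
```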

In the end it should look like this:

(screenshot: project's src directory structure)

After that, fix the LocalModel initialization code:

LocalModel localModel =
    new LocalModel.Builder()
    .setAssetFilePath("mobilenet_v1_1.0_128_quantized_1_default_1.tflite")
    // or .setAbsoluteFilePath(absolute file path to tflite model)
    .build();

Update: found one more issue

The ImageAnalysis instance was not bound to the CameraProvider:

...
ImageAnalysis imageAnalysis = ...
    
Camera camera = cameraProvider.bindToLifecycle((LifecycleOwner)this, cameraSelector, preview); // imageAnalysis is not used

To fix it, just pass the imageAnalysis variable as the last argument to the bindToLifecycle method:

Camera camera = cameraProvider.bindToLifecycle((LifecycleOwner)this, cameraSelector, preview, imageAnalysis);

Second update: found another issue

ML Kit cannot process the image because it is closed while being processed, or before processing even starts. I'm talking about the imageProxy.close() line declared inside public void analyze(ImageProxy imageProxy).

Javadoc of the close() method:

/**
 * Free up this frame for reuse.
 * <p>
 * After calling this method, calling any methods on this {@code Image} will
 * result in an {@link IllegalStateException}, and attempting to read from
 * or write to {@link ByteBuffer ByteBuffers} returned by an earlier
 * {@link Plane#getBuffer} call will have undefined behavior. If the image
 * was obtained from {@link ImageWriter} via
 * {@link ImageWriter#dequeueInputImage()}, after calling this method, any
 * image data filled by the application will be lost and the image will be
 * returned to {@link ImageWriter} for reuse. Images given to
 * {@link ImageWriter#queueInputImage queueInputImage()} are automatically
 * closed.
 * </p>
 */

To solve this, move imageProxy.close() into the failure and success listeners:

objectDetector
    .process(image)
    .addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(@NonNull Exception e) {
            Toast.makeText(getApplicationContext(), "Fail. Sad!", Toast.LENGTH_LONG).show();
            ...
            imageProxy.close();
        }
    })
    .addOnSuccessListener(new OnSuccessListener<List<DetectedObject>>() {
        @Override
        public void onSuccess(List<DetectedObject> results) {
            Toast.makeText(getBaseContext(), "Success...", Toast.LENGTH_LONG).show();
            ...
            imageProxy.close();
        }
    });

The fixed solution was tested with an image classification model from TensorFlow, and the test was successful.

Regarding "java - MLKit object detection not detecting objects", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/62606320/
