Detect and track objects with ML Kit on Android

You can use ML Kit to detect and track objects in successive video frames.

When you pass an image to ML Kit, it detects up to five objects in the image along with the position of each object in the image. When detecting objects in video streams, each object has a unique ID that you can use to track the object from frame to frame. You can also optionally enable coarse object classification, which labels objects with broad category descriptions.

This API uses an unbundled library that must be downloaded before use. See this guide for more information.

Try it out

  • Play around with the sample app to see an example usage of this API.
  • See the Material Design showcase app for an end-to-end implementation of this API.

Before you begin

This API requires Android API level 19 or above. Make sure that your app’s build file uses a minSdkVersion value of 19 or higher.
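
The minSdkVersion requirement and the detector library both go in your module-level build.gradle. A minimal sketch follows; the dependency coordinates and version are assumptions, so confirm them against the setup guide linked above:

android {
    defaultConfig {
        minSdkVersion 19 // required by this API
        // ...
    }
}

dependencies {
    // Assumed artifact name and version; check the setup guide for the
    // current coordinates of the ML Kit object detection library.
    implementation 'com.google.mlkit:object-detection:17.0.0'
}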

1. Configure the object detector

To detect and track objects, first create an instance of ObjectDetector and optionally specify any detector settings that you want to change from the default.

  1. Configure the object detector for your use case with an ObjectDetectorOptions object. You can change the following settings:

     Detection mode: STREAM_MODE (default) | SINGLE_IMAGE_MODE
     In STREAM_MODE (default), the object detector runs with low latency, but might produce incomplete results (such as unspecified bounding boxes or category labels) on the first few invocations of the detector. Also, in STREAM_MODE, the detector assigns tracking IDs to objects, which you can use to track objects across frames. Use this mode when you want to track objects, or when low latency is important, such as when processing video streams in real time.
     In SINGLE_IMAGE_MODE, the object detector returns the result after the object's bounding box is determined. If you also enable classification, it returns the result after the bounding box and category label are both available. As a consequence, detection latency is potentially higher. Also, in SINGLE_IMAGE_MODE, tracking IDs are not assigned. Use this mode if latency isn't critical and you don't want to deal with partial results.

     Detect and track multiple objects: false (default) | true
     Whether to detect and track up to five objects or only the most prominent object (default).

     Classify objects: false (default) | true
     Whether or not to classify detected objects into coarse categories. When enabled, the object detector classifies objects into the following categories: fashion goods, food, home goods, places, and plants.

    The object detection and tracking API is optimized for these two core use cases:

    • Live detection and tracking of the most prominent object in the camera viewfinder.
    • The detection of multiple objects from a static image.

To configure the API for these use cases:

Kotlin

// Live detection and tracking
val options = ObjectDetectorOptions.Builder()
    .setDetectorMode(ObjectDetectorOptions.STREAM_MODE)
    .enableClassification() // Optional
    .build()

// Multiple object detection in static images
val options = ObjectDetectorOptions.Builder()
    .setDetectorMode(ObjectDetectorOptions.SINGLE_IMAGE_MODE)
    .enableMultipleObjects()
    .enableClassification() // Optional
    .build()

Java

// Live detection and tracking
ObjectDetectorOptions options = new ObjectDetectorOptions.Builder()
    .setDetectorMode(ObjectDetectorOptions.STREAM_MODE)
    .enableClassification() // Optional
    .build();

// Multiple object detection in static images
ObjectDetectorOptions options = new ObjectDetectorOptions.Builder()
    .setDetectorMode(ObjectDetectorOptions.SINGLE_IMAGE_MODE)
    .enableMultipleObjects()
    .enableClassification() // Optional
    .build();

  2. Get an instance of ObjectDetector:

Kotlin

val objectDetector = ObjectDetection.getClient(options)

Java

ObjectDetector objectDetector = ObjectDetection.getClient(options);

2. Prepare the input image

The object detector runs directly from a Bitmap, NV21 ByteBuffer or a YUV_420_888 media.Image. Constructing an InputImage from those sources is recommended if you have direct access to one of them. If you construct an InputImage from other sources, ML Kit handles the conversion internally, which might be less efficient.

For each frame of video or image in a sequence, do the following:

You can create an InputImage object from different sources; each is explained below.

Using a media.Image

To create an InputImage object from a media.Image object, such as when you capture an image from a device’s camera, pass the media.Image object and the image’s rotation to InputImage.fromMediaImage() .

If you use the CameraX library, the OnImageCapturedListener and ImageAnalysis.Analyzer classes calculate the rotation value for you.

Kotlin

private class YourImageAnalyzer : ImageAnalysis.Analyzer {
    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage != null) {
            val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}

Java

private class YourAnalyzer implements ImageAnalysis.Analyzer {
    @Override
    public void analyze(ImageProxy imageProxy) {
        Image mediaImage = imageProxy.getImage();
        if (mediaImage != null) {
            InputImage image =
                InputImage.fromMediaImage(mediaImage, imageProxy.getImageInfo().getRotationDegrees());
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}

If you don’t use a camera library that gives you the image’s rotation degree, you can calculate it from the device’s rotation degree and the orientation of the camera sensor in the device:

Java

private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
static {
    ORIENTATIONS.append(Surface.ROTATION_0, 0);
    ORIENTATIONS.append(Surface.ROTATION_90, 90);
    ORIENTATIONS.append(Surface.ROTATION_180, 180);
    ORIENTATIONS.append(Surface.ROTATION_270, 270);
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
private int getRotationCompensation(String cameraId, Activity activity, boolean isFrontFacing)
        throws CameraAccessException {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    int rotationCompensation = ORIENTATIONS.get(deviceRotation);

    // Get the device's sensor orientation.
    CameraManager cameraManager = (CameraManager) activity.getSystemService(CAMERA_SERVICE);
    int sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION);

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360;
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360;
    }
    return rotationCompensation;
}

Then, pass the media.Image object and the rotation degree value to InputImage.fromMediaImage() :

Kotlin

val image = InputImage.fromMediaImage(mediaImage, rotation)

Java

InputImage image = InputImage.fromMediaImage(mediaImage, rotation);
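
For example, wiring the helper above into the conversion (cameraId, activity, and isFrontFacing are assumed to be available from your camera setup):

try {
    int rotation = getRotationCompensation(cameraId, activity, isFrontFacing);
    InputImage image = InputImage.fromMediaImage(mediaImage, rotation);
    // Pass image to an ML Kit Vision API
} catch (CameraAccessException e) {
    e.printStackTrace();
}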

Using a file URI

To create an InputImage object from a file URI, pass the app context and file URI to InputImage.fromFilePath() . This is useful when you use an ACTION_GET_CONTENT intent to prompt the user to select an image from their gallery app.
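
As a hedged sketch of that flow (REQUEST_PICK_IMAGE is a hypothetical request code defined by your app, not part of ML Kit):

// Prompt the user to select an image from their gallery app.
Intent intent = new Intent(Intent.ACTION_GET_CONTENT);
intent.setType("image/*");
startActivityForResult(intent, REQUEST_PICK_IMAGE);

// In onActivityResult(), the selected image's URI is available via
// data.getData() and can be passed to InputImage.fromFilePath() below.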

Kotlin

val image: InputImage
try {
    image = InputImage.fromFilePath(context, uri)
} catch (e: IOException) {
    e.printStackTrace()
}


Java

InputImage image;
try {
    image = InputImage.fromFilePath(context, uri);
} catch (IOException e) {
    e.printStackTrace();
}

Using a ByteBuffer or ByteArray

To create an InputImage object from a ByteBuffer or a ByteArray, first calculate the image rotation degree as previously described for media.Image input. Then, create the InputImage object with the buffer or array, together with the image's height, width, color encoding format, and rotation degree:

Kotlin

val image = InputImage.fromByteBuffer(
    byteBuffer,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)

// Or:
val image = InputImage.fromByteArray(
    byteArray,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)

Java

InputImage image = InputImage.fromByteBuffer(
    byteBuffer,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);

// Or:
InputImage image = InputImage.fromByteArray(
    byteArray,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);

Using a Bitmap

To create an InputImage object from a Bitmap object, make the following declaration:

Kotlin

val image = InputImage.fromBitmap(bitmap, 0)

Java

InputImage image = InputImage.fromBitmap(bitmap, rotationDegree);

The image is represented by a Bitmap object together with rotation degrees.

3. Process the image

Pass the image to the process() method:

Kotlin

objectDetector.process(image)
    .addOnSuccessListener { detectedObjects ->
        // Task completed successfully
        // ...
    }
    .addOnFailureListener { e ->
        // Task failed with an exception
        // ...
    }

Java

objectDetector.process(image)
    .addOnSuccessListener(
        new OnSuccessListener<List<DetectedObject>>() {
            @Override
            public void onSuccess(List<DetectedObject> detectedObjects) {
                // Task completed successfully
                // ...
            }
        })
    .addOnFailureListener(
        new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
                // Task failed with an exception
                // ...
            }
        });

Note: If you are using the CameraX API, make sure to close the ImageProxy when you finish using it, e.g., by adding an OnCompleteListener to the Task returned from the process method. See the VisionProcessorBase class in the quickstart sample app for an example.
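
A minimal sketch of that pattern, assuming the CameraX analyzer from step 2, where the ImageProxy is named imageProxy:

objectDetector.process(image)
    .addOnCompleteListener(new OnCompleteListener<List<DetectedObject>>() {
        @Override
        public void onComplete(@NonNull Task<List<DetectedObject>> task) {
            // Called after success or failure; close the ImageProxy so
            // CameraX can deliver the next frame to the analyzer.
            imageProxy.close();
        }
    });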

4. Get information about detected objects

If the call to process() succeeds, a list of DetectedObjects is passed to the success listener.

Note: In streaming mode, the object detector might need to process 30 or more frames, depending on device performance, before it detects the first object.

Each DetectedObject contains the following properties:

  • Bounding box: a Rect that indicates the position of the object in the image.
  • Tracking ID: an integer that identifies the object across images (null in SINGLE_IMAGE_MODE).
  • Labels: the object's coarse categories, each with a label text, a label index, and a confidence value. Labels are only present if classification is enabled.

Kotlin

for (detectedObject in detectedObjects) {
    val boundingBox = detectedObject.boundingBox
    val trackingId = detectedObject.trackingId
    for (label in detectedObject.labels) {
        val text = label.text
        if (PredefinedCategory.FOOD == text) {
            // ...
        }
        val index = label.index
        if (PredefinedCategory.FOOD_INDEX == index) {
            // ...
        }
        val confidence = label.confidence
    }
}

Java

// The list of detected objects contains one item if multiple
// object detection wasn't enabled.
for (DetectedObject detectedObject : detectedObjects) {
    Rect boundingBox = detectedObject.getBoundingBox();
    Integer trackingId = detectedObject.getTrackingId();
    for (Label label : detectedObject.getLabels()) {
        String text = label.getText();
        if (PredefinedCategory.FOOD.equals(text)) {
            // ...
        }
        int index = label.getIndex();
        if (PredefinedCategory.FOOD_INDEX == index) {
            // ...
        }
        float confidence = label.getConfidence();
    }
}

Ensuring a great user experience

For the best user experience, follow these guidelines in your app:

  • Successful object detection depends on the object's visual complexity. In order to be detected, objects with a small number of visual features might need to take up a larger part of the image. You should provide users with guidance on capturing input that works well with the kind of objects you want to detect.
  • When you use classification, if you want to detect objects that don't fall cleanly into the supported categories, implement special handling for unknown objects (see the sketch after this list).
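
A minimal sketch of such handling, under the assumption (not stated on this page) that objects the classifier cannot place in a supported category come back with an empty label list:

for (DetectedObject detectedObject : detectedObjects) {
    if (detectedObject.getLabels().isEmpty()) {
        // Assumption: an empty label list means no supported category fit.
        // showUnknownObjectHint() is a hypothetical app-side helper.
        showUnknownObjectHint(detectedObject.getBoundingBox());
    }
}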

Improving performance

If you want to use object detection in a real-time application, follow these guidelines to achieve the best framerates:

  • When you use streaming mode in a real-time application, don't use multiple object detection, as most devices won't be able to produce adequate framerates.
  • Disable classification if you don't need it.
  • If you use the Camera or camera2 API, throttle calls to the detector. If a new video frame becomes available while the detector is running, drop the frame. See the VisionProcessorBase class in the quickstart sample app for an example.
  • If you use the CameraX API, be sure that the backpressure strategy is set to its default value, ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST (see the sketch after this list). This guarantees only one image will be delivered for analysis at a time. If more images are produced while the analyzer is busy, they will be dropped automatically and not queued for delivery. Once the image being analyzed is closed by calling ImageProxy.close(), the next latest image will be delivered.
  • If you use the output of the detector to overlay graphics on the input image, first get the result from ML Kit, then render the image and overlay in a single step. This renders to the display surface only once for each input frame. See the CameraSourcePreview and GraphicOverlay classes in the quickstart sample app for an example.
  • If you use the Camera2 API, capture images in ImageFormat.YUV_420_888 format. If you use the older Camera API, capture images in ImageFormat.NV21 format.
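
As a sketch of the CameraX analysis setup those guidelines describe (executor and YourAnalyzer are assumed to come from your own camera code, as in the analyzer example in step 2):

ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
    // STRATEGY_KEEP_ONLY_LATEST is already the default; setting it
    // explicitly documents the frame-dropping behavior in code.
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build();
imageAnalysis.setAnalyzer(executor, new YourAnalyzer());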

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2023-07-24 UTC.
