Single Model Usage

If your application needs to run only a single model for detections, you can use the approach described here. However, we still recommend the approach described in Basic Usage.

To enable single model usage, do the following:

  1. Create an Application class and initialize LiveSenseEngine inside:

      LiveSenseEngine liveSenseEngine = LiveSenseEngine.getInstance();
      // Initialize the LiveSenseEngine instance.
      liveSenseEngine.initialize(this.getApplicationContext());
    
  2. In your Activity class, ask the end user for consent. This is required only for non-self-serve users.

     // Full Activity details omitted for brevity
     public class MainActivity extends Activity {
         private RoadBasicsModel roadBasicsModel;
    
         @Override
         protected void onCreate(Bundle savedInstanceState) {
             super.onCreate(savedInstanceState);
             LiveSenseEngine liveSenseEngine = null;
             try {
                 liveSenseEngine = LiveSenseEngine.getInstance();
                 // Ask the user for consent.
                 liveSenseEngine.requestConsent(this, new LiveSenseEngine.LSDConsentCallback() {
    
                     @Override
                     public void onAccept() {
                         // Save the response for future actions based on this
                         // Init Models
                         initModels();
                     }
    
                     @Override
                     public void onRefuse() {
                         // Save the response for future actions based on this
                         // Init Models
                         initModels();
                     }
                 });
             } catch (AuthorizationException e) {
                 // Missing or invalid credentials
                 e.printStackTrace();
             }
         }
     }
    
  3. Upon receiving a consent response, you can initialize the models:

      public void initModels() {
          // Initialize desired model class
          try {
              // Model options
              RoadBasicsModel.Options rbOptions = new RoadBasicsModel.Options();
              // Classifier model
              rbOptions.setEnableTrafficLightStatus(true);
              // Detector model
              roadBasicsModel = new RoadBasicsModel(rbOptions);
              roadBasicsModel.addClassMinConfidence("pedestrian", 0.45f);
          } catch (IOException e) {
              // Failed to initialize model
          } catch (AuthorizationException e) {
              // Missing or invalid credentials
          }
      }
  4. To build applications, use a camera or other real-time image stream instead of static resources.

    LSDCamera2Controller encapsulates the configuration and opening of the device's camera via the Camera2 API. Additionally, LSDCamera2Preview is available for displaying a camera preview.

    Note

    Your application is responsible for ensuring that camera runtime permission has been granted before attempting to use LSDCamera2Controller.
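
    As a minimal sketch, assuming a standard Activity with the AndroidX compat libraries (CAMERA_REQUEST_CODE and ensureCameraPermission are hypothetical names used only for illustration), the check might look like:

     private static final int CAMERA_REQUEST_CODE = 1;

     // Returns true if the CAMERA permission is already granted; otherwise requests it
     private boolean ensureCameraPermission() {
         if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
                 == PackageManager.PERMISSION_GRANTED) {
             return true;
         }
         // The user's response is delivered to onRequestPermissionsResult()
         ActivityCompat.requestPermissions(
                 this, new String[]{Manifest.permission.CAMERA}, CAMERA_REQUEST_CODE);
         return false;
     }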

    4.1. Basic setup of LSDCamera2Controller and LSDCamera2Preview:

     class CameraActivity : AppCompatActivity() {
    
         private lateinit var cameraController: LSDCamera2Controller
    
         private val imageListener = object : LSDCamera2ImageListener {
             override fun onImageAvailable(image: Image?) {
                 // TODO: Use image, auto-closed by controller
             }
         }
    
         override fun onCreate(savedInstanceState: Bundle?) {
             super.onCreate(savedInstanceState)
             // Layout containing LSDCamera2Preview
             setContentView(R.layout.activity_camera)
    
             // TODO: Ensure permissions have been granted!
    
             // Initialize the camera controller
             cameraController = LSDCamera2Controller(applicationContext)
             cameraController.initialize(LSDCamera2Config().apply {
                 this.targetStreamResolution = Size(1280, 960)
             })
             cameraController.setImageListener(WeakReference(imageListener))
    
             // Initialize preview
             val preview = findViewById<LSDCamera2Preview>(R.id.camera_preview)
             preview.initialize(cameraController)
         }
    
         override fun onResume() {
             super.onResume()
             // Open camera to begin receiving frames
             if (checkSelfPermission(Manifest.permission.CAMERA) == PERMISSION_GRANTED) {
                 cameraController.start()
             }
         }
    
         override fun onPause() {
             super.onPause()
             // Release camera on pause
             cameraController.stop()
         }
     }
    

    4.2. Layout of CameraActivity:

     ```xml
     <?xml version="1.0" encoding="utf-8"?>
     <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
         android:layout_width="match_parent"
         android:layout_height="match_parent">
    
         <com.here.see.livesense.ar_lib.camera2.LSDCamera2Preview
             android:id="@+id/camera_preview"
             android:layout_width="match_parent"
             android:layout_height="match_parent" />
     </FrameLayout>
     ```
    
  5. Prepare the data to be passed to the Live Sense SDK for recognition:

     /**
     * Class to encapsulate recognition on images from ImageReader connected to Camera2 or other source.
     * Assumes ImageReader format is YUV_420_888.
     */
     public class RecognitionFromCamera implements ImageReader.OnImageAvailableListener {
         private HandlerThread detectionThread;
         private Handler detectionHandler;
         private RoadBasicsModel roadBasicsModel;
         private volatile boolean isProcessingFrame = false;
         private volatile boolean isModelLoaded = false;
         private int previewWidth;
         private int previewHeight;
         private int sensorOrientation;
         private int rotation;
         private byte[][] yuvBytes = new byte[3][];
         private int[] rgbBytes;
         private Bitmap bitmap;
    
         /**
          * @param previewWidth      Width of ImageReader
          * @param previewHeight     Height of ImageReader
          * @param sensorOrientation Orientation reported by camera, see CameraCharacteristics.SENSOR_ORIENTATION
          */
         public void init(int previewWidth, int previewHeight, int sensorOrientation) {
             this.previewWidth = previewWidth;
             this.previewHeight = previewHeight;
             this.sensorOrientation = sensorOrientation;
             // Start background thread
             detectionThread = new HandlerThread("detection");
             detectionThread.start();
             detectionHandler = new Handler(detectionThread.getLooper());
    
             // Initialize model on detection thread
             detectionHandler.post(() -> {
                 try {
                     // Model options
                     RoadBasicsModel.Options rbOptions = new RoadBasicsModel.Options();
                     // Classifier model
                     rbOptions.setEnableTrafficLightStatus(true);
                     // Detector model
                     roadBasicsModel = new RoadBasicsModel(rbOptions);
                     roadBasicsModel.addClassMinConfidence("pedestrian", 0.45f);
    
                     // Buffer for RGB bytes after conversion from YUV
                     rgbBytes = new int[previewWidth * previewHeight];
                     // RGB bitmap for models
                     bitmap = Bitmap.createBitmap(previewWidth, previewHeight, Bitmap.Config.ARGB_8888);
                     // Mark ready only after the model and buffers are fully initialized
                     isModelLoaded = true;
                 } catch (IOException e) {
                     // Failed to initialize model
                 } catch (AuthorizationException e) {
                     // Missing or invalid credentials
                 }
             });
         }
    
         public void close() {
             if (detectionThread != null) {
                 detectionThread.quitSafely();
                 detectionThread = null;
                 detectionHandler = null;
             }
             if (roadBasicsModel != null) {
                 roadBasicsModel.close();
                 roadBasicsModel = null;
             }
             isModelLoaded = false;
         }
    
         /**
          * @param deviceRotation Device rotation from natural orientation as multiple of 90 degrees
          */
         public void setDeviceRotation(int deviceRotation) {
             // Assumes back facing camera
             this.rotation = (this.sensorOrientation - deviceRotation + 360) % 360;
         }
    
         @Override
         public void onImageAvailable(final ImageReader reader) {
             final Image image = reader.acquireLatestImage();
             if (image == null) {
                 return;
             }
             // Close unprocessable frames so the ImageReader does not stall
             if (!isModelLoaded || isProcessingFrame) {
                 image.close();
                 return;
             }
             isProcessingFrame = true;
             // Buffer image YUV bytes and close image
             final Image.Plane[] planes = image.getPlanes();
             final int yRowStride = planes[0].getRowStride();
             final int uvRowStride = planes[1].getRowStride();
             final int uvPixelStride = planes[1].getPixelStride();
    
             for (int i = 0; i < planes.length; ++i) {
                 final ByteBuffer buffer = planes[i].getBuffer();
                 if (yuvBytes[i] == null) {
                     yuvBytes[i] = new byte[buffer.capacity()];
                 }
                 buffer.get(yuvBytes[i]);
             }
             image.close();
    
             // Run detection on dedicated background thread to avoid blocking ImageReader's thread
             detectionHandler.post(() -> {
                 // Convert YUV to ARGB bitmap
                 ImageUtils.convertYUV420ToARGB8888(
                         yuvBytes[0],
                         yuvBytes[1],
                         yuvBytes[2],
                         previewWidth,
                         previewHeight,
                         yRowStride,
                         uvRowStride,
                         uvPixelStride,
                         rgbBytes);
                 // Populate the pre-allocated bitmap from the converted ARGB pixel data
                 // (alternatively: bitmap.setPixels(rgbBytes, 0, previewWidth, 0, 0, previewWidth, previewHeight))
                 ImageUtils.argb8888ToBitmap(rgbBytes, previewWidth, previewHeight, bitmap);
    
                 // Run recognition
                 List<Recognition> recognitions = roadBasicsModel.recognizeImage(bitmap, rotation, 0.6f);
                 // Process recognitions (filtering, tracking, etc.)
                 for (Recognition recognition : recognitions) {
                     Log.d("RoadBasics", recognition.getTitle() + " at "
                             + recognition.getLocation() + " with confidence " + recognition.getConfidence());
                 }
                 // Allow the next frame to be processed
                 isProcessingFrame = false;
             });
         }
     }
    
  6. Helper methods to convert raw data into a bitmap:

     // Create a Bitmap from ARGB pixel values.
     ImageUtils.argb8888ToBitmap(int[] argb, int width, int height);
    
     // [Optimized/Recommended] Populate a pre-allocated Bitmap from ARGB pixel values.
     ImageUtils.argb8888ToBitmap(int[] argb, int width, int height, Bitmap output);
    
     // Create a Bitmap from an ARGB byte[].
     ImageUtils.argb8888ToBitmap(byte[] argb, int width, int height);
    
     // [Optimized/Recommended] Populate a pre-allocated Bitmap from an ARGB byte[].
     ImageUtils.argb8888ToBitmap(byte[] argb, Bitmap output);
    
     // Converts YUV420 image plane data (e.g., from a YUV_420_888 ImageReader) to ARGB 8888 data using the supplied dimensions and strides.
     ImageUtils.convertYUV420ToARGB8888(
             byte[] yData,
             byte[] uData,
             byte[] vData,
             int width,
             int height,
             int yRowStride,
             int uvRowStride,
             int uvPixelStride,
             int[] out)
    
     // Converts YUV420 semi-planar data to ARGB 8888 data using the supplied width and height.
     ImageUtils.convertYUV420SPToARGB8888(
             byte[] input,
             int width,
             int height,
             int[] output)
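
     As a usage sketch, assuming yuv420spBytes, width, and height come from your own semi-planar (NV21-style) frame source, the pre-allocated overloads can be combined to avoid per-frame allocations:

      // Allocate once and reuse across frames to reduce garbage collection pressure
      int[] argbBuffer = new int[width * height];
      Bitmap frameBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);

      // Per frame: convert the YUV420 semi-planar bytes, then populate the bitmap
      ImageUtils.convertYUV420SPToARGB8888(yuv420spBytes, width, height, argbBuffer);
      ImageUtils.argb8888ToBitmap(argbBuffer, width, height, frameBitmap);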
    
  7. Run a detection call:

     public void runDetection() {
         // Retrieve RGB bitmap for image in app resources (or from some stream)
         Bitmap image = BitmapFactory.decodeResource(this.getResources(), R.drawable.test_image);
         // Run recognition on a non-rotated image with a minimum recognition confidence of 60%
         List<Recognition> detections = roadBasicsModel.recognizeImage(image, 0, 0.6f);
         // Process/display recognitions
     }
    
  8. Call the LiveSenseEngine.getInstance().onResume() method when the application returns to the foreground to resume services that were paused in the background, and call the LiveSenseEngine.getInstance().onPause() method when the application goes to the background. Pausing stops these services, which saves device memory and battery.

     @Override
     protected void onPause() {
         super.onPause();
         LiveSenseEngine.getInstance().onPause();
     }
    
     @Override
     protected void onResume() {
         super.onResume();
         LiveSenseEngine.getInstance().onResume();
     }
    
  9. Release the resources when exiting the application:

     @Override
     protected void onDestroy() {
         super.onDestroy();
         // Release model resources when done
         roadBasicsModel.close();
     }
    

Threading

The Live Sense models are not thread-safe and should be both initialized and executed on the same thread. Utilizing multiple threads for the same model instance may result in unexpected behavior.

Models may be executed in parallel, but each model instance can only handle one image at a time. Executing a model before the previous call has completed will result in an exception.
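
One way to satisfy both constraints is to confine each model instance to a single thread, as the HandlerThread in the camera example above does. The following is a minimal sketch (an illustration, not an SDK requirement) using a single-threaded executor from java.util.concurrent; queued tasks run one at a time, so initialization and inference are serialized and calls never overlap:

    // A single-threaded executor confines the model to one background thread
    private final ExecutorService modelExecutor = Executors.newSingleThreadExecutor();
    private RoadBasicsModel roadBasicsModel;

    public void submitFrame(final Bitmap image) {
        modelExecutor.execute(() -> {
            try {
                // Lazily initialize on the same thread that runs inference
                if (roadBasicsModel == null) {
                    roadBasicsModel = new RoadBasicsModel(new RoadBasicsModel.Options());
                }
                // Tasks execute sequentially, so this call never overlaps a previous one
                List<Recognition> recognitions = roadBasicsModel.recognizeImage(image, 0, 0.6f);
                // Process recognitions
            } catch (IOException | AuthorizationException e) {
                // Failed to initialize model, or missing/invalid credentials
            }
        });
    }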

For recommendations on using the Live Sense SDK in your application, see Recommendations.

For more functionality built upon the Live Sense core, see Utility Libraries.
