Basic Usage

Note

For a complete sample of real-time recognition of a video stream from the device's camera, see the example app.

Before proceeding, make sure you have followed the steps described in Adding SDK and Authenticating Applications.

The most basic use of Live Sense SDK includes the detection of cars, pedestrians, signs, and other supported objects in a still image. For details on what can be detected by each model, see Models.

Live Sense SDK can run a single model or multiple models in parallel. Set this up by following the steps below:

  1. Create an Application class and initialize LiveSenseEngine inside it:

     LiveSenseEngine liveSenseEngine = LiveSenseEngine.getInstance();
     // Initialize the LiveSenseEngine instance.
     liveSenseEngine.initialize(this.getApplicationContext());
    
  2. In your Activity class, request consent from the end user. This step is required only for non-self-serve users.

     // Full Activity details omitted for brevity
     public class CameraActivity extends Activity {
         private LSModelManagerParallel modelManager;
         private ManagerListener recognitionListener;
         private LSTrackerManagerImpl trackerManager;
         @Override
         protected void onCreate(Bundle savedInstanceState) {
             super.onCreate(savedInstanceState);
             LiveSenseEngine liveSenseEngine = LiveSenseEngine.getInstance();
             // Ask the user for consent.
             liveSenseEngine.requestConsent(this, new LiveSenseEngine.LSDConsentCallback() {
                 @Override
                 public void onAccept() {
                     // Save the response for future actions based on this
                     // Init Models
                     initModels();
                 }
    
                 @Override
                 public void onRefuse() {
                     // Save the response for future actions based on this
                     // Init Models
                     initModels();
                 }
             });
         }
     }
    
  3. Upon receiving a consent response, initialize the model manager (LSModelManagerParallel) and the tracker manager (LSTrackerManagerImpl).

    LSModelManagerParallel takes care of running all the required models in parallel. LSTrackerManagerImpl is required to track the recognitions.

    Note

    • You can also write your own model manager implementation; it must implement LSModelManager.
    • You can also write your own tracker manager implementation; it must implement LSTrackerManager.
     public void initModels() {

         // Listener for manager events; assign it before constructing the model manager
         recognitionListener = new ManagerListener() {
             @Override
             public void onError(int modelId, Throwable throwable) {
                 Log.e("CameraActivity", "Error in inference with modelId: " + modelId + ".\n" + throwable.getMessage());
             }

             @Override
             public void onRecognitions(int modelId, int imageId, List<Recognition> recognitions, long runTime) {
                 // Process recognitions/classifications (filtering, tracking, etc.)
                 String tag = "";
                 if (modelId == roadBasicsModelId) {
                     tag = "RoadBasics";
                 }
                 for (Recognition recognition : recognitions) {
                     Log.d(tag, recognition.getTitle() + " at "
                             + recognition.getLocation() + " with confidence " + recognition.getConfidence());
                 }

                 // If you have initialized trackerManager, you can get the list of tracked recognitions for further processing/display
                 List<TrackedRecognition> trackedRecognitions = trackerManager.getTrackedObjects();
             }
         };

         // Detector model manager
         modelManager = new LSModelManagerParallel(recognitionListener);

         // Set the tracker manager
         trackerManager = new LSTrackerManagerImpl();
         modelManager.setTrackerManager(trackerManager);

         // Initialize the desired model
         // (roadBasicsModelId is assumed to be a field of the Activity, as declared in the next step)
         roadBasicsModelId = modelManager.addModel(LiveSenseModel.ROAD_BASICS, new RoadBasicsModel.Options(), 0.6f);
         // Set a class-wise minimum confidence for a desired class
         modelManager.addClassMinConfidence(LSClassLabel.PEDESTRIAN, 0.40f);

         // Enable traffic light status classification by reloading the model with updated options
         RoadBasicsModel.Options rbOptions = new RoadBasicsModel.Options();
         rbOptions.setEnableTrafficLightStatus(true);
         modelManager.reloadModel(roadBasicsModelId, rbOptions);
     }
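    Conceptually, the class-wise threshold set via addClassMinConfidence acts as a per-label filter over the recognition list: a recognition survives only if its confidence meets its class's minimum (or the default minimum when no class-specific value is set). The sketch below illustrates that logic only; the ConfidenceFilter class and its Recognition stand-in are hypothetical and not part of the SDK.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical illustration of class-wise confidence filtering;
// not the SDK's actual implementation.
public class ConfidenceFilter {

    // Minimal recognition stand-in: a class label plus a confidence score.
    public static class Recognition {
        final String label;
        final float confidence;

        public Recognition(String label, float confidence) {
            this.label = label;
            this.confidence = confidence;
        }
    }

    private final Map<String, Float> classMinConfidence = new HashMap<>();
    private final float defaultMinConfidence;

    public ConfidenceFilter(float defaultMinConfidence) {
        this.defaultMinConfidence = defaultMinConfidence;
    }

    // Mirrors the role of addClassMinConfidence in the SDK snippet above.
    public void addClassMinConfidence(String label, float minConfidence) {
        classMinConfidence.put(label, minConfidence);
    }

    // Keep only recognitions meeting their class threshold (or the default).
    public List<Recognition> filter(List<Recognition> input) {
        List<Recognition> kept = new ArrayList<>();
        for (Recognition r : input) {
            float min = classMinConfidence.getOrDefault(r.label, defaultMinConfidence);
            if (r.confidence >= min) {
                kept.add(r);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        ConfidenceFilter filter = new ConfidenceFilter(0.6f);
        filter.addClassMinConfidence("PEDESTRIAN", 0.40f);

        List<Recognition> input = new ArrayList<>();
        input.add(new Recognition("PEDESTRIAN", 0.45f)); // kept (0.45 >= 0.40)
        input.add(new Recognition("CAR", 0.50f));        // dropped (0.50 < 0.60)
        input.add(new Recognition("CAR", 0.70f));        // kept (0.70 >= 0.60)

        System.out.println(filter.filter(input).size()); // prints 2
    }
}
```

    This is why a lower threshold for a class such as PEDESTRIAN lets weaker pedestrian detections through while other classes remain held to the stricter default.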
    
  4. Add the required models to the model manager and prepare the image data to pass to Live Sense SDK for recognition.

     /**
     * Class to encapsulate recognition on images from LSDCamera2Controller.
     * Assumes ImageReader format is YUV_420_888.
     */
     public class CameraActivity extends Activity {
         private volatile boolean isModelLoaded = false;
         private int sensorOrientation;
         private int rotation;
         private LSModelManager modelManager;
         private LSTrackerManager trackerManager;
         private int roadBasicsModelId;
         private static final String TAG = "CameraActivity";
    
         @Override
         protected void onCreate(Bundle savedInstanceState) {
             super.onCreate(savedInstanceState);
             // Layout containing LSDCamera2Preview
             setContentView(R.layout.activity_camera);
             initModels();
         }
    
         public void initModels() {
    
             // Detector model
             modelManager = new LSModelManagerParallel(recognitionListener);
    
             // setting tracker manager
             trackerManager = new LSTrackerManagerImpl();
             modelManager.setTrackerManager(trackerManager);           
             // Initialize desired model class
             roadBasicsModelId = modelManager.addModel(LiveSenseModel.ROAD_BASICS, new RoadBasicsModel.Options(), 0.6f);
            // Set a class-wise minimum confidence for a desired class
            modelManager.addClassMinConfidence(LSClassLabel.PEDESTRIAN, 0.40f);

            // Enable traffic light status classification by reloading the model with updated options
             RoadBasicsModel.Options rbOptions = new RoadBasicsModel.Options();
             rbOptions.setEnableTrafficLightStatus(true);
             modelManager.reloadModel(roadBasicsModelId, rbOptions);
    
             isModelLoaded = true;
         }
    
         public void close() {
             if (modelManager != null) {
                 modelManager.close();
                 modelManager = null;
             }
             if (trackerManager != null) {
                 trackerManager.close();
                 trackerManager = null;
             }
             isModelLoaded = false;
         }
    
         /**
          * @param deviceRotation Device rotation from natural orientation as multiple of 90 degrees
          */
         public void setDeviceRotation(int deviceRotation) {
             // Assumes back facing camera
             this.rotation = (this.sensorOrientation - deviceRotation + 360) % 360;
         }
    
         // Listener for manager events
         private LSModelManager.ManagerListener recognitionListener = new LSModelManager.ManagerListener() {
             @Override
             public void onError(int modelId, Throwable throwable) {
                 Log.e(TAG, "Error in inference with modelId: " + modelId +". \n"+ throwable.getMessage());
             }
    
             @Override
            public void onRecognitions(int modelId, int imageId, List<Recognition> recognitions, long runTime) {
                 // Process recognitions/classifications (filtering, tracking, etc.)
                 String tag = "";
                 if (modelId == roadBasicsModelId) {
                     tag = "RoadBasics";
                 }
                 for (Recognition recognition : recognitions) {
                     Log.d(tag, recognition.getTitle() + " at "
                             + recognition.getLocation() + " with confidence " + recognition.getConfidence());
                 }
    
                 // If you have initialized trackerManager, you can get list of tracked recognitions for further processing/display
                 List<TrackedRecognition> trackedRecognitions = trackerManager.getTrackedObjects();
    
             }
         };
    
         private final LSDCamera2ImageListener imageAvailableCallback = (Image image) -> {
             if (image == null || !isModelLoaded) {
                 Log.w(TAG, "Manager not initialized.");
                 return;
             }
             if (image.getFormat() != ImageFormat.YUV_420_888) {
                 Log.w(TAG, "Unsupported image format.");
                 image.close();
                 return;
             }
             // Run recognition.
             modelManager.offerImage(image, rotation);
         };
     }
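    The rotation value passed to offerImage above is plain modular arithmetic over the camera sensor orientation and the device rotation, as computed in setDeviceRotation. A standalone sketch of that calculation for a back-facing camera (RotationHelper is a hypothetical name for illustration):

```java
public class RotationHelper {

    /**
     * Computes the image rotation to apply before inference, assuming a
     * back-facing camera (mirrors setDeviceRotation in the snippet above).
     *
     * @param sensorOrientation Camera sensor orientation in degrees (0, 90, 180, 270)
     * @param deviceRotation    Device rotation from natural orientation as a multiple of 90 degrees
     */
    public static int computeRotation(int sensorOrientation, int deviceRotation) {
        // Adding 360 before the modulo keeps the result non-negative.
        return (sensorOrientation - deviceRotation + 360) % 360;
    }

    public static void main(String[] args) {
        // A typical back camera sensor orientation is 90 degrees.
        System.out.println(computeRotation(90, 0));   // portrait: prints 90
        System.out.println(computeRotation(90, 90));  // landscape: prints 0
    }
}
```

    For a front-facing camera the sign conventions differ because of mirroring, which is why the snippet above explicitly assumes the back camera.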
    
  5. To build applications, use the device's camera or another real-time image stream as the input source; avoid relying on static resources.

    LSDCamera2Controller encapsulates the configuration and opening of the device's camera via the Camera2 API. Additionally, LSDCamera2Preview is available for displaying a camera preview.

    Note

    Your application is responsible for ensuring that camera runtime permission has been granted before attempting to use LSDCamera2Controller.

    5.1. Basic setup of LSDCamera2Controller and LSDCamera2Preview:

     class CameraActivity : AppCompatActivity() {
    
         private lateinit var cameraController: LSDCamera2Controller
    
         private val imageListener = object : LSDCamera2ImageListener {
             override fun onImageAvailable(image: Image?) {
                 // TODO: Use image, auto-closed by controller
             }
         }
    
         override fun onCreate(savedInstanceState: Bundle?) {
             super.onCreate(savedInstanceState)
             // Layout containing LSDCamera2Preview
             setContentView(R.layout.activity_camera)
    
             // TODO: Ensure permissions have been granted!
    
             // Initialize the camera controller
             cameraController = LSDCamera2Controller(applicationContext)
             cameraController.initialize(LSDCamera2Config().apply {
                 this.targetStreamResolution = Size(1280, 960)
             })
             cameraController.setImageListener(WeakReference(imageListener))
    
             // Initialize preview
             val preview = findViewById<LSDCamera2Preview>(R.id.camera_preview)
             preview.initialize(cameraController)
         }
    
         override fun onResume() {
             super.onResume()
             // Open camera to begin receiving frames
             if (checkSelfPermission(Manifest.permission.CAMERA) == PERMISSION_GRANTED) {
                 cameraController.start()
             }
         }
    
         override fun onPause() {
             super.onPause()
             // Release camera on pause
             cameraController.stop()
         }
     }
    

    5.2. Layout of CameraActivity:

     <?xml version="1.0" encoding="utf-8"?>
     <FrameLayout 
         xmlns:android="http://schemas.android.com/apk/res/android"
         android:layout_width="match_parent"
         android:layout_height="match_parent">
    
         <com.here.see.livesense.ar_lib.camera2.LSDCamera2Preview
             android:id="@+id/camera_preview"
             android:layout_width="match_parent"
             android:layout_height="match_parent" />
     </FrameLayout>
    
  6. The SDK provides helper methods to convert raw image data into a Bitmap:

     // Create a Bitmap from ARGB pixel values.
     ImageUtils.argb8888ToBitmap(int[] argb, int width, int height);
    
     // [Optimized/Recommended] Populate a pre-allocated Bitmap from ARGB pixel values.
     ImageUtils.argb8888ToBitmap(int[] argb, int width, int height, Bitmap output);
    
      // Create a Bitmap from an ARGB byte[].
     ImageUtils.argb8888ToBitmap(byte[] argb, int width, int height);
    
     // [Optimized/Recommended] Populate a pre-allocated Bitmap from an ARGB byte[].
     ImageUtils.argb8888ToBitmap(byte[] argb, Bitmap output);
    
      // Converts YUV420 planar data (separate Y, U, and V planes) to ARGB8888 data using the supplied dimensions and strides.
     ImageUtils.convertYUV420ToARGB8888(
             byte[] yData,
             byte[] uData,
             byte[] vData,
             int width,
             int height,
             int yRowStride,
             int uvRowStride,
             int uvPixelStride,
             int[] out)
    
     // Converts YUV420 semi-planar data to ARGB 8888 data using the supplied width and height.
     ImageUtils.convertYUV420SPToARGB8888(
             byte[] input,
             int width,
             int height,
             int[] output)
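    These helpers apply standard YUV-to-RGB color math per pixel. A minimal sketch of that conversion using the common ITU-R BT.601 integer approximation (illustrative only; the YuvPixel class is hypothetical and the SDK's exact coefficients may differ):

```java
public class YuvPixel {

    private static int clamp(int v) {
        return Math.max(0, Math.min(255, v));
    }

    // Converts one YUV pixel to a packed ARGB8888 int using the common
    // BT.601 integer approximation (not necessarily the SDK's exact math).
    public static int yuvToArgb(int y, int u, int v) {
        int c = y - 16;
        int d = u - 128;
        int e = v - 128;
        int r = clamp((298 * c + 409 * e + 128) >> 8);
        int g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);
        int b = clamp((298 * c + 516 * d + 128) >> 8);
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        System.out.printf("0x%08X%n", yuvToArgb(16, 128, 128));  // black: prints 0xFF000000
        System.out.printf("0x%08X%n", yuvToArgb(235, 128, 128)); // white: prints 0xFFFFFFFF
    }
}
```

    The SDK's convertYUV420ToARGB8888 additionally handles the row and pixel strides of YUV_420_888 images, which is why it takes yRowStride, uvRowStride, and uvPixelStride parameters.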
    
  7. Test a detection call using a static image:

     public void runDetection() {
         // Retrieve RGB bitmap for image in app resources (or from some stream)
         BitmapFactory.Options options = new BitmapFactory.Options();
         options.inScaled = false;
         Bitmap image = BitmapFactory.decodeResource(this.getResources(), R.drawable.test_image, options);
          // Run recognition on the non-rotated image
         modelManager.offerImage(image, 0);
    
         // Return current list of tracked recognitions for further processing/displaying
         List<TrackedRecognition> trackedRecognitions = trackerManager.getTrackedObjects();
         // Process/display recognitions inside `recognitionListener` callback
     }
    
  8. It is recommended to call the following SDK lifecycle methods from the corresponding activity lifecycle callbacks:

    • LiveSenseEngine.getInstance().onResume()
    • LiveSenseEngine.getInstance().onPause()

      This helps the SDK manage its resources, reducing device memory and battery usage.

      @Override
      protected void onPause() {
        super.onPause();
        LiveSenseEngine.getInstance().onPause();
      }
      
      @Override
      protected void onResume() {
        super.onResume();
        LiveSenseEngine.getInstance().onResume();
      }
      
  9. Release resources when exiting the application:

     @Override
     protected void onDestroy() {
         super.onDestroy();
         // Release model resources when done
         modelManager.close();
         modelManager = null;
     }
    

For recommendations on using the SDK in your application, see Recommendations.

For more functionality built upon the Live Sense core, see Utility Libraries.
