You work as an ML engineer at a social media company, and you are developing a visual filter for users’ profile photos. This requires training an ML model to detect bounding boxes around human faces. You want to use this filter in your company’s iOS mobile application, minimize code development, and have the model optimized for inference on mobile phones.
What should you do?
A . Train a model using AutoML Vision and use the “export for Core ML” option.
B . Train a model using AutoML Vision and use the “export for Coral” option.
C . Train a model using AutoML Vision and use the “export for TensorFlow.js” option.
D . Train a custom TensorFlow model and convert it to TensorFlow Lite (TFLite).
Answer: A
Explanation:
AutoML Vision is a Google Cloud service that lets you train custom ML models for image classification and object detection without writing model code. You upload and label your training data, train a model through a graphical user interface, evaluate the model’s performance, and export it for deployment. One of the export options for edge models is Core ML, Apple’s framework for integrating ML models into iOS applications; Core ML optimizes the model for on-device performance, power efficiency, and a minimal memory footprint. By training with AutoML Vision and exporting for Core ML, you minimize code development and get a model optimized for inference on mobile phones. The other options do not fit: the Coral export (B) targets Edge TPU hardware, the TensorFlow.js export (C) targets browsers and Node.js, and training a custom TensorFlow model (D) requires substantially more code development.
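To illustrate the integration step, the sketch below shows how an exported Core ML model could be used for face detection in an iOS app via the Vision framework. This is a minimal, hedged example: `FaceDetector` is a hypothetical name standing in for the class Xcode auto-generates from the exported `.mlmodel` file, and error handling is reduced to the bare minimum.

```swift
import CoreML
import Vision
import UIKit

// Minimal sketch: running an AutoML Vision model exported for Core ML
// inside an iOS app. "FaceDetector" is a hypothetical placeholder for
// the class Xcode generates when you add the exported .mlmodel file.
func detectFaces(in image: UIImage,
                 completion: @escaping ([VNRecognizedObjectObservation]) -> Void) {
    guard let cgImage = image.cgImage,
          let coreMLModel = try? FaceDetector(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion([])
        return
    }

    // VNCoreMLRequest wraps the Core ML model so Vision handles image
    // scaling, orientation, and pixel-format conversion automatically.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Object-detection models return observations whose boundingBox
        // is expressed in normalized (0...1) image coordinates.
        let faces = request.results as? [VNRecognizedObjectObservation] ?? []
        completion(faces)
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

Because Vision and Core ML handle preprocessing and on-device optimization, the app-side code stays short, which matches the question’s requirement to minimize code development.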
References: AutoML Vision documentation; Core ML documentation