


Hi, my name is Ben, and I'm an engineer on the Core ML team. Today I'm going to show some of the exciting new features being added to Core ML. The focus of these features is to help you optimize your Core ML usage. In this session, I'll go over performance tools that are now available to give you the information you need to understand and optimize your model's performance when using Core ML. Then I'll go over some enhanced APIs which will enable you to make those optimizations. And lastly, I'll give an overview of some additional Core ML capabilities and integration options.

To give some background, I'll start by summarizing the standard workflow when using Core ML within your app. The first step is getting a Core ML model. This may be done in a variety of ways, such as using Core ML Tools to convert a PyTorch or TensorFlow model to Core ML format, using an already-existing Core ML model, or using Create ML to train and export your model. For more details on model conversion or to learn about Create ML, I recommend checking out these sessions. The next step is to integrate that model into your app. This involves bundling the model with your application and using the Core ML APIs to load and run inference on that model during your app's execution. The last step is to optimize the way you use Core ML.

Going back to that first step of getting a model: there are many aspects of a model that you may want to consider when deciding whether to use it within your app. You need a model whose functionality matches the requirements of the feature you wish to enable. You also may have multiple candidate models to select from, but how do you decide which one to use? That decision includes understanding the model's accuracy as well as its performance. A great way to learn about a Core ML model is by opening it in Xcode. Just double-click on any model, and it will bring up the following view. At the top, you'll find the model type, its size, and its operating system requirements.
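To make the integration step concrete, here is a minimal Swift sketch of loading a bundled model and running inference with the Core ML APIs. The model name "MyModel" and the "input" and "output" feature names are assumptions for illustration only; substitute the names from your own model's interface.

```swift
import CoreML

func runInference() throws {
    // Locate the compiled model (.mlmodelc) that Xcode bundles with the app.
    // "MyModel" is a hypothetical name; use your own model's name here.
    guard let modelURL = Bundle.main.url(forResource: "MyModel",
                                         withExtension: "mlmodelc") else {
        fatalError("Model not found in app bundle")
    }

    // Optionally constrain which compute units Core ML may use.
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .all  // CPU, GPU, and Neural Engine

    // Load the model with that configuration.
    let model = try MLModel(contentsOf: modelURL, configuration: configuration)

    // Build an input; "input" is an assumed feature name, and a single
    // scalar value is used purely for illustration.
    let features = try MLDictionaryFeatureProvider(dictionary: ["input": 1.0])

    // Run a prediction and read an assumed output feature named "output".
    let prediction = try model.prediction(from: features)
    print(prediction.featureValue(for: "output") ?? "no output")
}
```

In practice, Xcode also generates a typed wrapper class for each model you bundle, which gives you compile-time-checked inputs and outputs instead of the string-keyed dictionary used in this sketch.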
