New Solutions To Speed Up Performance
More than 5 billion people around the world have a mobile phone connection, according to a recent study by GSMA Intelligence. We can’t imagine our lives without mobile phones, which are gradually turning from convenient everyday tools into things like medical devices. It’s not surprising that many companies are developing AI tools for mobile devices.
But there’s a serious shortage of professionals in the AI industry with multi-field expertise. An engineer may be a great mobile software developer but lack knowledge in machine learning—or conversely, an ML engineer or data scientist may have no experience with mobile software.
Many companies are working to address this issue with solutions like doc.ai’s Net Runner, an on-device environment for computer vision that will be presented at NeurIPS at the “Machine Learning on the Phone and Other Consumer Devices” workshop on Friday, December 7th.
Net Runner allows data scientists with no mobile software experience to prototype and evaluate models quickly on mobile devices using TensorIO, which implements a layer of abstraction on top of an existing library like TensorFlow Lite. With these cutting-edge tools, data scientists with little knowledge of TensorFlow can add their models to the application and see the output in real time.
Models can be measured for latency and accuracy, run on hundreds or thousands of images from iPhoto albums, and tested automatically. TensorIO, combined with Net Runner, parses a description of the underlying model from a JSON file and then uses platform-specific APIs to perform the required image operations. Because the declarative specification is plain text and platform-independent, it can use vocabulary most machine learning researchers have already worked with: types, shapes, and transformations.
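To make the declarative approach concrete, here is a sketch of what a JSON model description along these lines might contain. The field names and values below are illustrative assumptions, not the exact TensorIO schema; consult the TensorIO repository for the real format.

```json
{
  "name": "example_classifier",
  "model": { "file": "model.tflite" },
  "inputs": [
    {
      "name": "image",
      "type": "image",
      "shape": [224, 224, 3],
      "normalize": { "scale": 0.00784, "bias": -1.0 }
    }
  ],
  "outputs": [
    {
      "name": "classification",
      "type": "array",
      "shape": [1000]
    }
  ]
}
```

A specification like this says nothing about iOS, Android, or TensorFlow itself; it only declares the types, shapes, and transformations involved, which is what lets the same plain-text description drive platform-specific image-processing code underneath.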
Why Machine Learning Will Go on Mobile Devices
Today, most computing is centralized in the cloud, powered by tech giants like Amazon, Microsoft, Google, and IBM. Sending data to the cloud, however, can put privacy and security at risk. So we created infrastructure that lets data scientists advance their work with models directly on mobile devices.
Working on-device has these clear advantages:
Speed. It can take up to 20 seconds to run a model in the cloud, but running the same model on a device takes a few milliseconds, significantly reducing latency. The model can also be tested offline, which adds further convenience.
Privacy. By working on-device, there’s no need to share your data with third parties.
Security. Doc.ai is driven by the mission to bring decentralized machine intelligence into the market to ensure much higher security in the field.
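The latency gap described in the Speed point above can be illustrated with a minimal, hypothetical benchmark. The two functions below are stand-ins, not real model calls: a sleep simulates the network round-trip of a cloud request, and a small computation simulates local on-device inference.

```python
import time

def benchmark(fn, runs=10):
    """Return the average wall-clock latency of fn in milliseconds."""
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs * 1000.0

def remote_inference():
    # Simulated network round-trip to a cloud-hosted model.
    time.sleep(0.05)

def local_inference():
    # Simulated lightweight on-device computation.
    sum(i * i for i in range(1000))

remote_ms = benchmark(remote_inference)
local_ms = benchmark(local_inference)
print(f"remote: {remote_ms:.1f} ms, local: {local_ms:.1f} ms")
```

In a real setting the remote path also includes serialization and server-side queuing, while the local path depends on the model size and the device's hardware, so the exact numbers vary; the point is only that removing the network round-trip dominates the comparison.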
Net Runner has helped doc.ai develop and iterate on its Phenomenal Face Selfie-to-BMI mobile model, trained on NVIDIA Tesla V100 GPUs; newer NVIDIA offerings such as the TITAN RTX and RAPIDS can also contribute to the development of mobile device tools.
Supporting the spirit of open source in the AI community, doc.ai has published the Net Runner and TensorIO code on GitHub. Currently, Net Runner supports TensorFlow Lite models and runs on iOS 9.3 or higher.
Stop by the doc.ai booth #108 and visit the “Machine Learning on the Phone and Other Consumer Devices” workshop to learn more about on-device solutions.