Bringing machine learning to embedded devices!
- Jul 12, 2018 -

NXP Semiconductors has announced an easy-to-use, comprehensive machine learning development environment for building innovative applications with advanced capabilities. Customers can now easily implement machine learning across NXP's portfolio, from low-cost microcontrollers (MCUs) to breakthrough i.MX RT crossover processors and high-performance application processors.

The machine learning development environment provides a full-featured, turnkey solution that lets users choose the best execution engine, from Arm Cortex cores to high-performance GPU/DSP (graphics processing unit/digital signal processor) complexes, and provides tools for deploying machine learning models, including neural networks, on those engines.

Embedded artificial intelligence (AI) is rapidly becoming a fundamental capability of edge processing, enabling "smart" devices to recognize their surroundings and make decisions based on the information they receive, with little or no human intervention. NXP's machine learning development environment supports the rapid growth of machine learning in vision, voice, and anomaly detection applications.

Vision-based machine learning applications feed camera input to a variety of machine learning algorithms, among which neural networks are the most popular. These applications span most vertical market segments and perform functions such as object recognition, authentication, and people counting. Voice-activated devices (VADs) are driving the need for machine learning at the edge to enable wake-word detection, natural language processing, and voice-user-interface applications. Machine learning-based anomaly detection, which works from vibration or sound patterns, can identify impending failures, significantly reducing equipment downtime and accelerating the shift to Industry 4.0. NXP offers its customers a variety of solutions for integrating machine learning into their applications.
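To make the anomaly detection idea concrete, here is a minimal, illustrative sketch, not NXP's implementation and with all names and parameters hypothetical: a baseline spectral profile is learned from vibration windows recorded during normal operation, and a new window is flagged when its spectrum deviates strongly from that baseline. A production system would typically use richer features or a trained model such as an autoencoder.

```python
# Minimal vibration-based anomaly detection sketch (illustrative only;
# not NXP's implementation). Learn a baseline spectrum from healthy
# samples, then flag windows that deviate from it.
import numpy as np

WINDOW = 256  # samples per analysis window (hypothetical)

def spectral_features(window):
    """Magnitude spectrum of one vibration window (e.g., accelerometer data)."""
    return np.abs(np.fft.rfft(window * np.hanning(len(window))))

def fit_baseline(healthy_windows):
    """Per-bin mean and std of spectra observed during normal operation."""
    spectra = np.array([spectral_features(w) for w in healthy_windows])
    return spectra.mean(axis=0), spectra.std(axis=0) + 1e-9

def is_anomalous(window, mean, std, threshold=4.0):
    """Flag the window if any frequency bin deviates too far from baseline."""
    z = np.abs(spectral_features(window) - mean) / std
    return z.max() > threshold

# Example: train on simulated healthy vibration, then test a faulty signal.
rng = np.random.default_rng(0)
t = np.arange(WINDOW) / WINDOW
healthy = [np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(WINDOW)
           for _ in range(200)]
mean, std = fit_baseline(healthy)
faulty = np.sin(2 * np.pi * 10 * t) + 0.8 * np.sin(2 * np.pi * 37 * t)
print(is_anomalous(healthy[0], mean, std))  # expected: False
print(is_anomalous(faulty, mean, std))      # expected: True
```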

The NXP machine learning development environment provides free software that allows customers to import their own trained TensorFlow or Caffe models, convert them into optimized AI inference engines, and deploy them across NXP's broad, scalable processing portfolio, from MCUs to highly integrated i.MX and Layerscape processors.
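The article does not detail the conversion tool itself, but the import-convert-deploy flow it describes resembles common open-source workflows. Below is a hedged sketch using the public TensorFlow Lite converter, an assumption rather than NXP's specific toolchain, to turn a trained model into a compact artifact suitable for embedded inference; the model path is hypothetical.

```python
# Illustrative import -> convert -> deploy flow (public TensorFlow Lite
# converter; not NXP's specific toolchain).
import tensorflow as tf

# Load a trained model (path is hypothetical).
model = tf.keras.models.load_model("my_trained_model.h5")

# Convert to an optimized flat buffer suitable for embedded inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # e.g., weight quantization
tflite_model = converter.convert()

# The resulting file can be shipped alongside firmware and executed by an
# on-device inference engine on an MCU or application processor.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```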

"When using machine learning in embedded applications, you must balance both cost and end-user experience. For example, the AI inference engine can be deployed in our cost-effective MCUs and get enough performance, which is still surprising to many people. "On the other hand, our high-performance cross-border and application processors also have powerful processing power, enabling fast AI reasoning and training in many customer applications," said Markus Levy, head of NXP's artificial intelligence technology. As we continue to expand, we will continue to drive growth in this application area with next-generation processors designed to accelerate machine learning."

Another key requirement for bringing AI/machine learning into edge computing applications is the ability to easily and securely deploy to, and upgrade, embedded devices from the cloud. NXP's EdgeScale platform supports secure provisioning and management of IoT and edge devices. EdgeScale delivers an end-to-end, continuous development and delivery experience by integrating AI/machine learning tools and inference engines in the cloud and automatically and securely deploying the integrated modules to edge devices.

To meet a wide range of customer needs, NXP has also created a machine learning partner ecosystem that connects customers with technology vendors, accelerating product development and time to market through proven machine learning tools, inference engines, solutions, and design services. Members of the ecosystem include Au-Zone Technologies and Pilot.AI.

Au-Zone Technologies offers the industry's first end-to-end embedded machine learning toolkit and run-time inference engine, DeepView, which enables developers to deploy and profile CNNs across NXP's entire SoC portfolio, including Arm Cortex-A and Cortex-M cores and GPUs.

Pilot.AI has built a framework for running a variety of perception tasks, including detection, classification, tracking, and identification, on a range of customer platforms from microcontrollers to GPUs, along with data collection/annotation tools and pre-trained models that enable direct model deployment.

