AI in the Smart Factory with Kit from Lattice
Highly capable, lower-cost camera systems are now being used to detect the presence of a human in a hazardous zone and disable machinery, thereby preventing harm to workers. The same techniques can also be used to identify a foreign object on a production line. Production control teams will find it easier to achieve higher quality levels when they can be 100% sure that the correct components are where they should be, and when predictive maintenance applications proactively detect equipment defects.
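The detection-to-shutdown behavior described above amounts to a simple safety interlock. The sketch below illustrates the idea only; the function and signal names are hypothetical, not Lattice code:

```python
# Hypothetical safety-interlock sketch: detection results from a
# camera-based classifier gate the machine-enable signal.

def machine_enable(person_in_zone: bool, foreign_object: bool) -> bool:
    """Allow the machine to run only when the hazard zone is clear and
    no foreign object is present on the production line."""
    return not (person_in_zone or foreign_object)

print(machine_enable(False, False))  # zone clear, line clear -> True
print(machine_enable(True, False))   # person detected -> False, halt
```

In a real deployment the boolean inputs would come from the vision pipeline's per-frame classification results, and the output would drive the machine's enable line.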
Machine vision systems are not new in the industrial environment, but their proliferation, the explosive growth of this sector, and the plethora of new applications emerging daily are due largely to breakthroughs in AI processing, especially low-cost, low-power AI inferencing systems that can be applied at the edge, i.e. right beside, or even integrated within, the sensor on the machine itself.
Machine learning typically involves two types of computing workloads: training and inferencing. During training, a system learns a new capability by collecting and analyzing large amounts of existing data. This activity is highly compute-intensive and is therefore typically conducted in the data center on high-performance hardware. The second phase, inferencing, applies the trained system's capabilities to new data, identifying patterns and performing tasks. In some cases designers cannot afford to perform inferencing in the data center because of latency, privacy and cost barriers; instead they must perform those computational tasks close to the edge. Low-cost FPGAs are often highly suitable for this work.
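The training/inferencing split can be illustrated with a toy model. This is a minimal sketch under stated assumptions: a plain logistic-regression classifier stands in for the real workload, and the labelled data is synthetic:

```python
import math
import random

random.seed(0)

# --- Training (data-center side): compute-intensive, iterating many
# times over a large set of existing labelled data to learn weights.
samples = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
labels = [1.0 if x1 - 2 * x2 > 0 else 0.0 for x1, x2 in samples]

w = [0.0, 0.0]
for _ in range(100):                       # many passes = heavy compute
    for (x1, x2), label in zip(samples, labels):
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2)))
        w[0] -= 0.05 * (p - label) * x1    # gradient step
        w[1] -= 0.05 * (p - label) * x2

# --- Inferencing (edge side): only the learned weights are deployed;
# each new sensor reading costs a tiny dot product and a comparison.
def infer(x1: float, x2: float) -> bool:
    return w[0] * x1 + w[1] * x2 > 0

print(infer(1.0, -1.0))   # classify a new data point
```

The asymmetry is the point: training loops over the whole dataset repeatedly, while inference on one new sample is cheap enough to run on a small edge device.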
In many cases AI-powered machine vision systems sit right on the individual machine. This is a major advantage: all detection and processing take place locally, so latency is reduced to a minimum and dependence on cloud connectivity is eliminated. Applications can be designed for a dynamic frame rate, consuming as little as 1 mW of power. A simply and quickly developed, FPGA-powered AI machine vision system can detect and confirm a specific object in real time by leveraging an advanced neural network engine that is innately more accurate than traditional algorithms.
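One common way to realize such a dynamic frame rate is to idle the sensor at a low rate and ramp up only when activity is detected. A minimal sketch follows; the frame rates and per-frame energy figure are illustrative assumptions, not Lattice specifications:

```python
IDLE_FPS = 1                 # assumed idle rate while the scene is quiet
ACTIVE_FPS = 30              # assumed full rate once activity is detected
ENERGY_PER_FRAME_MJ = 0.5    # hypothetical energy per inference, millijoules

def frame_rate(activity_detected: bool) -> int:
    """Pick the capture/inference rate for the next interval."""
    return ACTIVE_FPS if activity_detected else IDLE_FPS

def average_power_mw(active_fraction: float) -> float:
    """First-order average power estimate: frames/s x energy/frame."""
    fps = active_fraction * ACTIVE_FPS + (1 - active_fraction) * IDLE_FPS
    return fps * ENERGY_PER_FRAME_MJ

print(frame_rate(False))                 # 1
print(average_power_mw(0.0))             # 0.5 mW when the scene stays quiet
```

Because average power scales with the frame rate, a mostly idle scene keeps the system in the milliwatt range while still reacting at full rate when something happens.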
Lattice has streamlined the path for machine vision system designers with its ECP5-based Embedded Vision Development Kit, a highly flexible, modular solution that lets embedded vision designers build a prototyping system quickly, together with the Lattice sensAI stack. With free soft IP cores (CNN and compact CNN accelerators), software tools, reference designs, demos, and customized design services, sensAI accelerates the integration of on-device sensor data processing and analytics in edge devices.