Intel and Philips Accelerate Deep Learning Inference on CPUs in Key Medical Imaging Uses

Using Intel® Xeon® Scalable processors and the OpenVINO™ toolkit, Intel and Philips* tested two healthcare use cases for deep learning inference models: one on X-rays of bones for bone-age-prediction modeling, the other on CT scans of lungs for lung segmentation. In these tests, Intel and Philips achieved speed improvements over the baseline measurements of 188 times for the bone-age-prediction model and 38 times for the lung-segmentation model.

“Intel Xeon Scalable processors appear to be the right solution for this type of AI workload. Our customers can use their existing hardware to its maximum potential, while still aiming to achieve quality output resolution at exceptional speeds.”
–Vijayananda J., chief architect and fellow, Data Science and AI at Philips HealthSuite Insights

Why It’s Important: Until recently, there was one prominent hardware solution for accelerating deep learning: graphics processing units (GPUs). By design, GPUs work well with images, but they also have inherent memory constraints that data scientists have had to work around when building some models.

Central processing units (CPUs) – in this case Intel Xeon Scalable processors – don’t have those same memory constraints and can accelerate complex, hybrid workloads, including larger, memory-intensive models typically found in medical imaging. For a large subset of artificial intelligence (AI) workloads, Intel Xeon Scalable processors can better meet data scientists’ needs than GPU-based systems. As Philips found in the two recent tests, this enables the company to offer AI solutions at lower cost to its customers.

AI techniques such as object detection and segmentation can help radiologists identify issues faster and more accurately, which can translate to better prioritization of cases, better outcomes for more patients and reduced costs for hospitals.

Deep learning inference applications typically process workloads in small batches or in a streaming manner, rather than in the large batches common during training. CPUs are a great fit for these low-batch and streaming applications. Intel Xeon Scalable processors in particular offer an affordable, flexible platform for AI models, especially in conjunction with tools like the OpenVINO toolkit, which helps deploy pre-trained models efficiently without sacrificing accuracy.
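
For illustration, the sketch below shows what low-batch (single-image) CPU inference with the OpenVINO toolkit can look like in Python. It is a minimal example, not the Philips implementation: the model path "model.xml", the CPU device target and the 1x1x512x512 input shape are placeholder assumptions, and the pre-trained model is assumed to have already been converted to OpenVINO's IR format.

```python
# Minimal sketch: batch-size-1 (streaming-style) inference on a CPU with OpenVINO.
# "model.xml" and the 1x1x512x512 input shape are illustrative placeholders,
# not the actual Philips models or data.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")         # pre-trained model in IR format
compiled = core.compile_model(model, "CPU")  # target an Intel Xeon CPU
output_layer = compiled.output(0)

# Process one image at a time, as in a streaming radiology workload.
image = np.random.rand(1, 1, 512, 512).astype(np.float32)  # batch size N = 1
result = compiled([image])[output_layer]
print("Output shape:", result.shape)
```

Processing one image per request keeps memory use low and maps naturally onto clinical workflows, where images arrive one study at a time rather than in large training-style batches.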

These tests show that healthcare organizations can run AI workloads on the CPU-based infrastructure they already own, without investing in expensive dedicated hardware.

What the Results Show: The results for both use cases surpassed expectations. The bone-age-prediction model went from an initial baseline test result of 1.42 images per second to a final tested rate of 267.1 images per second after optimizations – an increase of 188 times. The lung-segmentation model far surpassed the target of 15 images per second by improving from a baseline of 1.9 images per second to 71.7 images per second after optimizations.
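
As a quick check, the speedup factors quoted above follow directly from the throughput numbers, as the short calculation below shows.

```python
# Speedup factor = optimized throughput / baseline throughput (images per second).
results = {
    "bone-age prediction": (1.42, 267.1),
    "lung segmentation": (1.9, 71.7),
}
for name, (baseline, optimized) in results.items():
    print(f"{name}: {optimized / baseline:.0f}x faster")
# bone-age prediction: 188x faster
# lung segmentation: 38x faster
```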

What’s Next: Running healthcare deep learning workloads on CPU-based devices offers direct benefits to companies like Philips, because it allows them to offer AI-based services that don’t drive up costs for their end customers. As shown in these tests, companies like Philips can offer AI algorithms for download through an online store as a way to increase revenue and differentiate themselves from growing competition.

More Context: Multiple trends are contributing to this shift:

  • As medical image resolution improves, medical image file sizes are growing – many images are 1GB or greater.
  • More healthcare organizations are using deep learning inference to more quickly and accurately review patient images.
  • Organizations are looking for ways to do this without buying expensive new infrastructure.

The Philips tests are just one example of these trends in action. Novartis* is another. And many other Intel customers – not yet publicly announced – are achieving similar results. Learn more about Intel AI technology in healthcare at “Advancing Data-Driven Healthcare Solutions.”
