Using NPU in Newer Intel PCs
A curated collection of resources for getting started with Intel's Neural Processing Unit (Intel AI Boost) for on-device AI development.
About NPUs
Newer Intel PCs ship with a dedicated Neural Processing Unit (NPU), branded as Intel AI Boost. This hardware accelerator is designed for sustained AI workloads such as eye contact correction, background blur, and other inference tasks -- running them efficiently without taxing the CPU or GPU.
The following resources cover how to build AI applications that target the NPU through frameworks such as OpenVINO, DirectML, and ONNX Runtime.
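Before diving into the resources, it can help to confirm that a framework actually sees the NPU. A minimal sketch using OpenVINO's device enumeration (the `pick_npu_device` helper name is illustrative, not part of any framework; assumes the `openvino` package is installed via `pip install openvino`, and falls back to CPU when it is not):

```python
# Minimal sketch: discover OpenVINO inference devices and prefer the NPU.
# Assumes the `openvino` package; `pick_npu_device` is an illustrative helper
# name, not an API from the toolkit itself.
def pick_npu_device() -> str:
    try:
        import openvino as ov
    except ImportError:
        return "CPU"  # OpenVINO not installed; fall back to CPU
    # available_devices lists plugin names, e.g. ['CPU', 'GPU', 'NPU']
    # on a machine with Intel AI Boost and current NPU drivers.
    devices = ov.Core().available_devices
    return "NPU" if "NPU" in devices else "CPU"

if __name__ == "__main__":
    device = pick_npu_device()
    print(f"Compiling models for: {device}")
    # A model would then be compiled for that device, e.g.:
    # compiled = ov.Core().compile_model("model.xml", device_name=device)
```

If "NPU" does not appear in the device list on an AI Boost machine, the NPU driver is usually the missing piece; several of the guides below walk through the driver setup.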
Reference Resources
How to develop and build your first AI PC app on Intel NPU (Intel AI Boost)
TechGig -- Getting started guide for building AI applications on Intel NPU.
Introducing Neural Processor Unit (NPU) support in DirectML (developer preview)
DirectX Developer Blog -- Microsoft's NPU support via DirectML.
OpenVINO Notebooks -- Windows Setup
GitHub Wiki -- Setting up OpenVINO notebooks on Windows for NPU development.
How to run and develop your AI app on Intel NPU (Intel AI Boost)
Medium / OpenVINO Toolkit -- by Raymond Lo, PhD. Detailed walkthrough using OpenVINO.
Running models using NPU with Copilot+ PC
pkbullock.com -- Practical guide for running AI models on NPU with Copilot+ PCs.
Getting started with Intel NPU
Medium -- by Abhishek Nandy. Introduction to Intel NPU fundamentals.
Code Repository
git clone https://github.com/microsoft/onnxruntime-inference-examples.git
Microsoft's ONNX Runtime inference examples repository contains sample code for running models on various hardware targets including NPU.
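Before running those samples, you can check which ONNX Runtime execution providers your install exposes; an NPU-capable provider (for example `OpenVINOExecutionProvider`) must be listed for models to run off the CPU. A minimal sketch, assuming the `onnxruntime` package is installed (it returns an empty list when it is not):

```python
# Minimal sketch: list ONNX Runtime execution providers to see whether an
# NPU-capable provider is present. Assumes the `onnxruntime` package;
# `list_execution_providers` is an illustrative wrapper, not an ORT API.
def list_execution_providers() -> list:
    try:
        import onnxruntime as ort
    except ImportError:
        return []  # onnxruntime not installed
    # Returns provider names, e.g. ['OpenVINOExecutionProvider',
    # 'CPUExecutionProvider'] when the OpenVINO EP build is installed.
    return ort.get_available_providers()

if __name__ == "__main__":
    providers = list_execution_providers()
    print("Available execution providers:", providers)
    # An InferenceSession would then be created with a provider order, e.g.:
    # session = ort.InferenceSession("model.onnx", providers=providers)
```

Note that the stock `onnxruntime` wheel ships only the CPU provider; NPU-capable providers come from dedicated builds such as `onnxruntime-openvino`.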