Microsoft Embraces DeepSeek R1 For Copilot+ PCs Despite Ongoing Investigation
Despite mounting questions surrounding DeepSeek’s R1 model, including Microsoft’s own probe into whether the AI was trained on OpenAI data without authorization, Microsoft has confirmed that a specialized version of DeepSeek R1 will be fully supported on Copilot+ PCs. The company aims to help developers by offering an optimized variant, DeepSeek-R1-Distill-Qwen-1.5B, that runs locally on Windows 11 systems. While there is no public word on whether Microsoft’s internal investigation will result in further action, the tech giant appears to be welcoming DeepSeek’s cost-effective AI into its ecosystem for now.
According to a recent Windows blog post, Microsoft’s plan includes:
Initial Distilled Model: The first release, DeepSeek-R1-Distill-Qwen-1.5B, is optimized specifically for devices with NPUs (Neural Processing Units), starting with Qualcomm’s Snapdragon X chipsets and, later, Intel’s Core Ultra 200V processors.
Future Expansions: Variants at the 7B and 14B parameter scales will follow, enabling more powerful on-device AI experiences on Copilot+ PCs.
Deployment Process: Developers can download the model through the AI Toolkit VS Code extension by selecting it from the AI Toolkit model catalog, which draws on Azure AI Foundry. Once installed, they can load “deepseek_r1_1_5” in the Playground environment and start prompting the model right away (a minimal request sketch follows after this list).
NPU Efficiency: Microsoft has applied further optimizations, converting the model to the ONNX QDQ format and tuning its memory usage, so that R1 can run efficiently on local hardware without relying on cloud inference (see the NPU loading sketch below).
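To make the deployment step concrete, here is a minimal sketch of prompting a locally hosted copy of the distilled model through an OpenAI-style chat-completions endpoint. The AI Toolkit Playground is the path the blog post describes; the local URL, port, and model identifier below are illustrative assumptions rather than values confirmed by Microsoft.

```python
# Sketch: prompting a locally hosted DeepSeek R1 distill through an
# OpenAI-compatible chat-completions endpoint. The URL, port, and model
# name are illustrative assumptions; check the AI Toolkit documentation
# for the actual local endpoint it exposes.
import json
import urllib.request

ENDPOINT = "http://127.0.0.1:5272/v1/chat/completions"  # assumed local endpoint
MODEL_NAME = "deepseek_r1_1_5"                           # name used in the Playground

payload = {
    "model": MODEL_NAME,
    "messages": [
        {"role": "user", "content": "Explain what an NPU is in one sentence."}
    ],
    "max_tokens": 256,
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    reply = json.loads(response.read())

# Responses in the OpenAI chat-completions shape keep the generated text here.
print(reply["choices"][0]["message"]["content"])
```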
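The NPU efficiency item can likewise be pictured as loading the ONNX QDQ model with ONNX Runtime and requesting the Qualcomm QNN execution provider, with the CPU as a fallback. The model path and the backend option below are assumptions for illustration; full text generation would also require tokenization and KV-cache handling layered on top of the raw session.

```python
# Sketch: creating an ONNX Runtime session that targets a Qualcomm NPU via the
# QNN execution provider, falling back to the CPU. The model path is a
# placeholder, and the "backend_path" option assumes a Windows build of
# ONNX Runtime with QNN support.
import onnxruntime as ort

MODEL_PATH = "deepseek_r1_distill_qwen_1_5b_qdq.onnx"  # hypothetical local path

session = ort.InferenceSession(
    MODEL_PATH,
    providers=[
        ("QNNExecutionProvider", {"backend_path": "QnnHtp.dll"}),  # NPU (HTP) backend
        "CPUExecutionProvider",                                    # fallback
    ],
)

# Confirm which providers the session actually resolved to.
print(session.get_providers())
```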
This move signals a measure of trust in DeepSeek’s technology, even as Microsoft, alongside OpenAI, investigates potential unauthorized use of OpenAI data. Given DeepSeek’s meteoric rise and the R1 model’s swift popularity, the Redmond-based company appears keen to capture developer interest and offer advanced local AI capabilities despite the lingering legal uncertainty.
Do you see local, NPU-optimized AI models like DeepSeek R1 as the future of on-device development, or will the industry continue to favor cloud-based solutions? Share your views on how this might impact AI deployment in the coming months.