TPUXtract Hacks Google's Edge TPU, Copies AI Models with 99.9% Accuracy

In a landmark study that underscores the growing importance of AI hardware security, researchers at North Carolina State University have demonstrated a novel and highly effective method for extracting AI models directly from hardware devices. This new technique, dubbed TPUXtract, achieves an unprecedented 99.91% accuracy in replicating AI models deployed on Google’s Edge Tensor Processing Unit (Edge TPU), a specialized chip widely used in edge computing applications.
This breakthrough represents a significant escalation in the ongoing “arms race” between AI developers and those seeking to exploit vulnerabilities in AI systems. The findings were initially reported by NC State News.
“AI models are valuable intellectual property,” explains Aydin Aysu, co-author of the research paper and associate professor at NC State. “Developing these models requires significant investment in computing resources and expertise. Protecting them from theft is crucial, not only to safeguard that investment but also to prevent malicious actors from reverse-engineering the models to discover weaknesses or create adversarial attacks.”
Unlike traditional hacking methods that target software vulnerabilities, TPUXtract exploits electromagnetic side-channel emissions from the Edge TPU during model execution. By placing an electromagnetic probe near the chip, researchers can capture these emissions, which contain information about the model’s architecture. The process involves several key steps:
- Signal Capture: The electromagnetic probe captures the subtle electromagnetic signals emanating from the TPU.
- Layer-by-Layer Analysis: The captured signals are analyzed to decompose the model’s architecture into individual layers.
- Template Matching: The electromagnetic “signature” of each layer is compared against a database of 5,000 known layer patterns.
- Progressive Reconstruction: By matching the captured signatures to known patterns, the researchers can progressively reconstruct the entire model architecture, layer by layer.
“Rather than attempting to recreate the entire electromagnetic signature of the model at once, which would be computationally prohibitive,” explains Ashley Kurian, the paper’s first author and a Ph.D. student at NC State, “we break it down layer by layer. Our database of known layer signatures allows us to efficiently identify and reconstruct the model’s structure.”
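The layer-by-layer matching described above can be illustrated with a toy sketch. This is not the researchers' actual code; the template database, segment length, and correlation-based scoring are all simplified assumptions standing in for the real attack's database of roughly 5,000 layer signatures.

```python
import numpy as np

# Hypothetical database of known layer "signatures": name -> reference EM trace.
# The real attack used ~5,000 templates; this toy set uses three random ones.
rng = np.random.default_rng(seed=0)
TEMPLATE_DB = {
    "conv2d_3x3": rng.standard_normal(256),
    "dense_128": rng.standard_normal(256),
    "maxpool_2x2": rng.standard_normal(256),
}

def match_layer(segment: np.ndarray) -> str:
    """Return the template name whose signature best correlates with the segment."""
    def score(template: np.ndarray) -> float:
        # Normalized correlation as a crude similarity measure
        return float(np.corrcoef(segment, template)[0, 1])
    return max(TEMPLATE_DB, key=lambda name: score(TEMPLATE_DB[name]))

def reconstruct(trace: np.ndarray, segment_len: int = 256) -> list[str]:
    """Split the captured EM trace into per-layer segments and match each in order."""
    layers = []
    for start in range(0, len(trace), segment_len):
        segment = trace[start:start + segment_len]
        if len(segment) < segment_len:
            break
        layers.append(match_layer(segment))
    return layers

# Simulate a captured trace: two known layers plus measurement noise
trace = np.concatenate([
    TEMPLATE_DB["conv2d_3x3"] + 0.1 * rng.standard_normal(256),
    TEMPLATE_DB["dense_128"] + 0.1 * rng.standard_normal(256),
])
print(reconstruct(trace))  # -> ['conv2d_3x3', 'dense_128']
```

The key design idea this mirrors is decomposition: instead of matching the whole model's signature at once, each per-layer segment is matched independently, so the search space stays small even for architectures never seen before.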
This approach allows TPUXtract to work even on previously unseen AI model architectures, a significant advance over earlier model extraction techniques. The full details of the research are published in the proceedings of the Conference on Cryptographic Hardware and Embedded Systems.
This research marks a significant leap forward in side-channel attacks on AI hardware. Previous attempts, dating back to 2020, had limited success and often required prior knowledge of the target model's architecture. TPUXtract overcomes this limitation, posing a far greater threat to the security of deployed AI models.