
Boffins trick AI model into giving up its secrets

Computer scientists from North Carolina State University have devised a way to copy AI models running on Google Edge Tensor Processing Units (TPUs), as used in Google Pixel phones and third-party machine learning accelerators.

The technique, developed by NC State researchers Ashley Kurian, Anuj Dubey, Ferhat Yaman, and Aydin Aysu, is a side-channel attack: it measures the electromagnetic emissions given off while an AI model runs (inference) on a TPU, and exploits those measurements to infer the model's hyperparameters.

Machine learning hyperparameters are values set before training begins that shape how a model learns – the learning rate, the batch size, or the pool size, for example. They're distinct from model parameters – such as weights – which are internal to the model and are learned during training.
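For readers who want to see where those values live in practice, here's a minimal Keras sketch – our illustration, not anything from the paper: the learning rate, batch size, and pool size are fixed by the developer before training starts, while the weights are what training actually learns.

```python
# Illustration only: where hyperparameters and parameters live in a Keras model.
import tensorflow as tf

# Hyperparameters -- chosen by the developer before training starts.
LEARNING_RATE = 1e-3   # passed to the optimizer
BATCH_SIZE = 32        # passed to model.fit(...) at training time
POOL_SIZE = 2          # baked into the architecture itself

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=POOL_SIZE),  # pool size is a hyperparameter
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

# Parameters -- the weights -- are learned during training, not chosen by hand.
print(model.count_params(), "learnable parameters (weights and biases)")
```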

An adversary with both can largely reproduce an AI model at far less cost than the original training incurred – something developers spending billions on building AI models would presumably prefer to avoid. A variety of parameter extraction techniques already exist.

“A hyperparameter stealing attack followed by parameter extraction can create a high-fidelity substitute model with the extracted information to mimic the victim model,” the researchers explain in their paper, “TPUXtract: An Exhaustive Hyperparameter Extraction Framework.”

While there have been prior, more limited hyperparameter attacks, the researchers claim theirs is the first to perform comprehensive hyperparameter extraction and the first model-stealing attack to target the Google Edge TPU.

“Because we stole the architecture and layer details, we were able to recreate the high-level features of the AI,” explained Aydin Aysu, a co-author of the paper and associate professor at NC State, in a statement. “We then used that information to recreate the functional AI model, or a very close surrogate of that model.”

The attack scenario assumes the adversary has access to the device – a Coral Dev Board with a Google Edge TPU – during inference, and can take electromagnetic measurements using Riscure hardware (icWaves, Transceiver, and a High Sensitivity EM probe) and a PicoScope oscilloscope. Knowledge of the software deployment environment (TensorFlow Lite for the Edge TPU) is also assumed. However, details of the Edge TPU's architecture and instruction set are not required.

The researchers' approach extracts information about each neural network layer sequentially, then feeds the hyperparameters extracted for that layer back into the framework to guide extraction of the next. This overcomes the limitations of prior efforts, which required an impractical brute-force attack on the entire model yet yielded only some of its hyperparameters.
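To make the layer-by-layer idea concrete, here is a toy Python sketch of how such a feedback loop could be structured. Every helper below (candidate_configs, profile_em_signature, correlate) is an invented stand-in, not the researchers' tooling; the point is simply that each recovered layer narrows the candidate space for the next, rather than brute-forcing the whole model at once.

```python
import numpy as np

# Hypothetical, highly simplified illustration of a sequential extraction loop.
# The helper functions are stand-ins, not anything from the TPUXtract paper.

def candidate_configs(layer_idx, recovered):
    """Stand-in: enumerate plausible hyperparameters for this layer."""
    for kernel in (1, 3, 5):
        for filters in (16, 32, 64):
            yield {"layer": layer_idx, "kernel": kernel, "filters": filters}

def profile_em_signature(configs, trace_len=256):
    """Stand-in: pretend to profile a candidate model and record its EM trace."""
    rng = np.random.default_rng(hash(str(configs)) % (2**32))
    return rng.standard_normal(trace_len)

def correlate(template, trace):
    """Similarity between a profiled template and the victim's captured trace."""
    return float(np.corrcoef(template, trace)[0, 1])

def extract_model(victim_traces):
    """Recover hyperparameters one layer at a time, feeding each result back."""
    recovered = []
    for layer_idx, trace in enumerate(victim_traces):
        best = max(
            candidate_configs(layer_idx, recovered),
            key=lambda cfg: correlate(profile_em_signature(recovered + [cfg]), trace),
        )
        recovered.append(best)  # feedback: constrains the next layer's search
    return recovered

# Toy usage: three synthetic "captured" traces stand in for real EM measurements.
victim = [np.random.default_rng(i).standard_normal(256) for i in range(3)]
print(extract_model(victim))
```

The key design point, per the paper, is that feedback: hyperparameters recovered for one layer are fed back into the framework, keeping the search per-layer instead of letting it explode across the entire network.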

According to the researchers, their approach can recreate a model with 99.91 percent accuracy. The process – tested on models including MobileNet V3, Inception V3, and ResNet-50 – takes about three hours per layer. The models examined in the paper range from 28 to 242 layers.

“Our research demonstrates that an adversary can effectively reverse engineer the hyperparameters of a neural network by observing its EM emanations during inference, even in a black box setting,” the authors state in their paper. “The coverage and accuracy of our approach raise significant concerns about the vulnerability of commercial accelerators like the Edge TPU to model stealing in various real-world scenarios.”

Google is aware of the researchers’ findings, and declined to comment on the record. The Register understands from conversations with shy comms folk that one of the reasons the Coral Dev Board was chosen is that it does not implement memory encryption. ®
