Securing Additive Manufacturing Systems from Cyber and Intellectual Property Attacks

Author(s)
Liang, Sizhuang
Abstract
Additive Manufacturing (AM), also known as 3D printing, refers to a family of manufacturing processes in which material is joined layer by layer to build objects directly from 3D models. Because of its many advantages, such as rapid prototyping, mass customization, material savings, and design flexibility, AM is increasingly replacing traditional manufacturing processes. However, AM relies heavily on computers, and as AM systems gain popularity in critical industry sectors, the risk of cyberattacks on them grows. Intrusion Detection Systems (IDSs) can protect AM systems from cyberattacks that aim to sabotage them. In recent years, researchers have proposed a series of IDSs that leverage side-channel signals. A side-channel signal is typically a physical signal correlated with the state of the AM system, such as the acoustic emission or the electromagnetic wave emitted by a 3D printer during printing. Because of this correlation, intrusion detection can be performed by analyzing the side-channel signal. Indeed, most existing side-channel IDSs for AM systems work by comparing an observed side-channel signal against a reference side-channel signal. However, we found that these IDSs are impractical due to a lack of synchronization: many IDSs in the literature give no details on how to align two (or more) side-channel signals at their starting and stopping moments. In addition, we found that AM processes exhibit time noise: when the same G-code file is executed on the same 3D printer multiple times, the printing processes have slightly different timing. Because of time noise, a direct point-by-point or window-by-window comparison between two signals is not meaningful.
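The thesis's dynamic synchronization algorithm is not detailed in this abstract; as a minimal sketch of the underlying idea, classic dynamic time warping (DTW) shows how corresponding points between a reference trace and a time-noisy observed trace can still be found even when a point-by-point comparison fails. All signal values below are illustrative toy data.

```python
# Minimal DTW sketch (illustrative only, not the thesis's algorithm):
# align a reference side-channel trace with an observed trace that
# contains "time noise" (an extra, stretched sample).

def dtw_align(ref, obs):
    """Return the DTW cost and the warping path pairing indices of
    `ref` (reference trace) with `obs` (observed trace)."""
    n, m = len(ref), len(obs)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(ref[i - 1] - obs[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a ref sample
                                 cost[i][j - 1],      # skip an obs sample
                                 cost[i - 1][j - 1])  # match both
    # Backtrack to recover which indices correspond.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = min(cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1])
        if step == cost[i - 1][j - 1]:
            i, j = i - 1, j - 1
        elif step == cost[i - 1][j]:
            i -= 1
        else:
            j -= 1
    return cost[n][m], path[::-1]

# The same print executed twice: the observed trace has one extra
# sample due to time noise, so index k no longer matches index k.
reference = [0.0, 1.0, 2.0, 1.0, 0.0]
observed  = [0.0, 1.0, 1.0, 2.0, 1.0, 0.0]
total_cost, path = dtw_align(reference, observed)
```

Despite the timing mismatch, the warping path pairs every reference sample with an equal-valued observed sample, so the aligned comparison cost is zero, whereas a direct point-by-point comparison of these two traces would report spurious differences.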
To overcome this problem, we propose dynamic synchronization, which finds corresponding points between two signals in real time. To demonstrate its necessity, we performed a total of 302 benign printing processes and 200 malicious printing processes on two printers. Our experimental results show that existing side-channel IDSs for AM systems achieve an accuracy of only 0.50 to 0.88, whereas our IDS with dynamic synchronization reaches an accuracy of 0.99. Besides cyberattacks that sabotage AM systems, there are also cyberattacks that steal intellectual property from them. For example, acoustic side-channel attacks on AM systems can recover the printing path by analyzing the sound emitted by a printer during printing. However, we found that the acoustic side-channel attack is hard to perform due to challenges such as integration drift and non-unique solutions. In this thesis, we explore the optical side-channel attack, which is much easier to mount: the optical side-channel signal is essentially the video of a printing process. We use a modified deep neural network, ResNet50, to recognize the coordinates of the printhead in each video frame. To defend against the optical side-channel attack, we propose optical noise injection: an optical projector artificially injects crafted optical noise onto the printing area to confuse the attacker and make it harder to recover the printing path. We found that existing noise generation algorithms, such as replaying, random blobs, white noise, and full power, effortlessly defeat a naive attacker who is unaware of the injected noise. However, an advanced attacker who knows about the injected noise and incorporates noise-injected images into the training dataset can defeat all of these existing algorithms.
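As a concrete illustration of one of the existing noise generation algorithms named above, the sketch below generates a "random blobs" style noise frame: randomly placed bright circles to be projected onto the printing area. The frame resolution, blob count, and intensity range here are assumptions for illustration, not values from the thesis.

```python
# Sketch of a "random blobs" optical noise frame (illustrative; frame
# size, blob count, and brightness range are assumed, not from the thesis).
import random

WIDTH, HEIGHT = 64, 48   # assumed (downscaled) projector resolution

def random_blobs_frame(n_blobs=5, max_radius=6, seed=None):
    """Return a HEIGHT x WIDTH grayscale frame (values 0-255) containing
    randomly placed filled circles of random brightness."""
    rng = random.Random(seed)
    frame = [[0] * WIDTH for _ in range(HEIGHT)]
    for _ in range(n_blobs):
        cx, cy = rng.randrange(WIDTH), rng.randrange(HEIGHT)
        r = rng.randint(2, max_radius)
        val = rng.randint(128, 255)  # bright blob on a dark background
        for y in range(max(0, cy - r), min(HEIGHT, cy + r + 1)):
            for x in range(max(0, cx - r), min(WIDTH, cx + r + 1)):
                if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                    frame[y][x] = max(frame[y][x], val)
    return frame

frame = random_blobs_frame(seed=42)
```

A new frame would be generated and projected periodically during printing; as the abstract notes, such noise confuses a naive attacker but not an advanced one who trains on noise-injected images.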
To defend against such an advanced attacker, we propose three novel noise generation algorithms: channel uniformization, state uniformization, and state randomization. Our experimental results show that noise generated via state randomization successfully defeats the advanced attacker.
Date
2021-11-15
Resource Type
Text
Resource Subtype
Dissertation