{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,15]],"date-time":"2026-04-15T17:50:01Z","timestamp":1776275401144,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":54,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,10,26]],"date-time":"2022-10-26T00:00:00Z","timestamp":1666742400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,10,26]]},"DOI":"10.1145\/3545948.3545976","type":"proceedings-article","created":{"date-parts":[[2022,10,17]],"date-time":"2022-10-17T11:21:49Z","timestamp":1666005709000},"page":"321-332","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":39,"title":["Transferable Graph Backdoor Attack"],"prefix":"10.1145","author":[{"given":"Shuiqiao","family":"Yang","sequence":"first","affiliation":[{"name":"The University of New South Wales, Australia"}]},{"given":"Bao Gia","family":"Doan","sequence":"additional","affiliation":[{"name":"The University of Adelaide, Australia"}]},{"given":"Paul","family":"Montague","sequence":"additional","affiliation":[{"name":"Defence Science and Technology Group, Australia"}]},{"given":"Olivier","family":"De Vel","sequence":"additional","affiliation":[{"name":"Defence Science and Technology Group, Australia"}]},{"given":"Tamas","family":"Abraham","sequence":"additional","affiliation":[{"name":"Defence Science and Technology Group, Australia"}]},{"given":"Seyit","family":"Camtepe","sequence":"additional","affiliation":[{"name":"CSIRO Data61, Australia"}]},{"given":"Damith C.","family":"Ranasinghe","sequence":"additional","affiliation":[{"name":"The University of Adelaide, Australia"}]},{"given":"Salil 
S.","family":"Kanhere","sequence":"additional","affiliation":[{"name":"UNSW, Australia"}]}],"member":"320","published-online":{"date-parts":[[2022,10,26]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/3134600.3134606"},
{"key":"e_1_3_2_1_2_1","unstructured":"Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. 2018. Adversarial attacks and defences: A survey. arXiv preprint arXiv:1810.00069 (2018)."},
{"key":"e_1_3_2_1_3_1","unstructured":"Bryant Chen, Wilka Carvalho, Nathalie Baracaldo, Heiko Ludwig, Benjamin Edwards, Taesung Lee, Ian Molloy, and Biplav Srivastava. 2018. Detecting backdoor attacks on deep neural networks by activation clustering. arXiv preprint arXiv:1811.03728 (2018)."},
{"key":"e_1_3_2_1_4_1","doi-asserted-by":"crossref","unstructured":"Huili Chen, Cheng Fu, Jishen Zhao, and Farinaz Koushanfar. 2019. DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks. In IJCAI, Vol.\u00a02. 8.","DOI":"10.24963\/ijcai.2019\/647"},
{"key":"e_1_3_2_1_5_1","volume-title":"Sentinet: Detecting physical attacks against deep learning systems.","author":"Chou Edward","year":"2018","unstructured":"Edward Chou, Florian Tram\u00e8r, Giancarlo Pellegrino, and Dan Boneh. 2018. Sentinet: Detecting physical attacks against deep learning systems. (2018)."},
{"key":"e_1_3_2_1_6_1","volume-title":"International Conference on Machine Learning. PMLR, 1310\u20131320","author":"Cohen Jeremy","year":"2019","unstructured":"Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. 2019. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning. PMLR, 1310\u20131320."},
{"key":"e_1_3_2_1_7_1","volume-title":"International conference on machine learning. PMLR, 1115\u20131124","author":"Dai Hanjun","year":"2018","unstructured":"Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, and Le Song. 2018. Adversarial attack on graph structured data. In International conference on machine learning. PMLR, 1115\u20131124."},
{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/3427228.3427264"},
{"key":"e_1_3_2_1_9_1","doi-asserted-by":"crossref","unstructured":"Bao\u00a0Gia Doan, Minhui Xue, Shiqing Ma, Ehsan Abbasnejad, and Damith\u00a0C. Ranasinghe. 2021. TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems. (2021).","DOI":"10.1109\/TIFS.2022.3198857"},
{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0022-2836(03)00628-4"},
{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/3359789.3359790"},
{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1214\/aoms\/1177706098"},
{"key":"e_1_3_2_1_13_1","unstructured":"Ian\u00a0J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)."},
{"key":"e_1_3_2_1_14_1","volume-title":"Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733(2017).","author":"Gu Tianyu","year":"2017","unstructured":"Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. 2017. Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733 (2017)."},
{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2019.2909068"},
{"key":"e_1_3_2_1_16_1","volume-title":"Inductive representation learning on large graphs. Advances in neural information processing systems 30","author":"Hamilton Will","year":"2017","unstructured":"Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. Advances in neural information processing systems 30 (2017)."},
{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3447556.3447566"},
{"key":"e_1_3_2_1_18_1","unstructured":"Thomas\u00a0N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016)."},
{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2019.00044"},
{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01615"},
{"key":"e_1_3_2_1_21_1","unstructured":"Cong Liao, Haoti Zhong, Anna Squicciarini, Sencun Zhu, and David Miller. 2018. Backdoor embedding in convolutional neural network models via invisible perturbation. arXiv preprint arXiv:1808.10307 (2018)."},
{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01234-2_23"},
{"key":"e_1_3_2_1_23_1","unstructured":"Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. 2016. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770 (2016)."},
{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3319535.3363216"},
{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCD.2017.16"},
{"key":"e_1_3_2_1_26_1","unstructured":"Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017)."},
{"key":"e_1_3_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.282"},
{"key":"e_1_3_2_1_28_1","volume-title":"Tudataset: A collection of benchmark datasets for learning with graphs. arXiv preprint arXiv:2007.08663(2020).","author":"Morris Christopher","year":"2020","unstructured":"Christopher Morris, Nils\u00a0M Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. 2020. Tudataset: A collection of benchmark datasets for learning with graphs. arXiv preprint arXiv:2007.08663 (2020)."},
{"key":"e_1_3_2_1_29_1","unstructured":"Luis Mu\u00f1oz-Gonz\u00e1lez, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, and Emil\u00a0C Lupu. 2019. Poisoning attacks with generative adversarial nets. arXiv preprint arXiv:1906.07773 (2019)."},
{"key":"e_1_3_2_1_30_1","volume-title":"Twenty-Fourth International Joint Conference on Artificial Intelligence.","author":"Orsini Francesco","year":"2015","unstructured":"Francesco Orsini, Paolo Frasconi, and Luc De\u00a0Raedt. 2015. Graph invariant kernels. In Twenty-Fourth International Joint Conference on Artificial Intelligence."},
{"key":"e_1_3_2_1_31_1","unstructured":"Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. 2016. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277 (2016)."},
{"key":"e_1_3_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/3052973.3053009"},
{"key":"e_1_3_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/EuroSP.2016.36"},
{"key":"e_1_3_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.5120\/21220-3960"},
{"key":"e_1_3_2_1_35_1","unstructured":"Ahmed Salem, Rui Wen, Michael Backes, Shiqing Ma, and Yang Zhang. 2020. Dynamic backdoor attacks against machine learning models. arXiv preprint arXiv:2003.03675 (2020)."},
{"key":"e_1_3_2_1_36_1","volume-title":"Pang Wei\u00a0W Koh, and Percy\u00a0S Liang","author":"Steinhardt Jacob","year":"2017","unstructured":"Jacob Steinhardt, Pang Wei\u00a0W Koh, and Percy\u00a0S Liang. 2017. Certified defenses for data poisoning attacks. Advances in neural information processing systems 30 (2017)."},
{"key":"e_1_3_2_1_37_1","unstructured":"Lichao Sun, Yingtong Dou, Carl Yang, Ji Wang, Philip\u00a0S Yu, Lifang He, and Bo Li. 2018. Adversarial attack and defense on graph data: A survey. arXiv preprint arXiv:1812.10528 (2018)."},
{"key":"e_1_3_2_1_38_1","unstructured":"Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903 (2017)."},
{"key":"e_1_3_2_1_39_1","volume-title":"ICML 2021 Workshop on Adversarial Machine Learning.","author":"Wan Xingchen","year":"2021","unstructured":"Xingchen Wan, Henry Kenlay, Binxin Ru, Arno Blaas, Michael Osborne, and Xiaowen Dong. 2021. Attacking Graph Classification via Bayesian Optimisation. In ICML 2021 Workshop on Adversarial Machine Learning."},
{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3319535.3354206"},
{"key":"e_1_3_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2019.00031"},
{"key":"e_1_3_2_1_42_1","first-page":"5","article-title":"NIST special database 4. Fingerprint Database","volume":"17","author":"Watson I","year":"1992","unstructured":"Craig\u00a0I Watson and Charles\u00a0L Wilson. 1992. NIST special database 4. Fingerprint Database, National Institute of Standards and Technology 17, 77 (1992), 5.","journal-title":"National Institute of Standards and Technology"},
{"key":"e_1_3_2_1_43_1","unstructured":"Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. 2019. Adversarial examples on graph data: Deep insights into attack and defense. arXiv preprint arXiv:1903.01610 (2019)."},
{"key":"e_1_3_2_1_44_1","volume-title":"30th {USENIX} Security Symposium ({USENIX} Security 21).","author":"Xi Zhaohan","unstructured":"Zhaohan Xi, Ren Pang, Shouling Ji, and Ting Wang. 2021. Graph backdoor. In 30th {USENIX} Security Symposium ({USENIX} Security 21)."},
{"key":"e_1_3_2_1_45_1","unstructured":"Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, and Xue Lin. 2019. Topology attack and defense for graph neural networks: An optimization perspective. arXiv preprint arXiv:1906.04214 (2019)."},
{"key":"e_1_3_2_1_46_1","unstructured":"Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2018. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826 (2018)."},
{"key":"e_1_3_2_1_47_1","unstructured":"Shuiqiao Yang, Sunny Verma, Borui Cai, Jiaojiao Jiang, Kun Yu, Fang Chen, and Shui Yu. 2021. Variational Co-embedding Learning for Attributed Network Clustering. arXiv preprint arXiv:2104.07295 (2021)."},
{"key":"e_1_3_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00765"},
{"key":"e_1_3_2_1_49_1","doi-asserted-by":"crossref","unstructured":"Xiao Zang, Yi Xie, Jie Chen, and Bo Yuan. 2020. Graph universal adversarial attacks: A few bad actors ruin graph learning models. arXiv preprint arXiv:2002.04784 (2020).","DOI":"10.24963\/ijcai.2021\/458"},
{"key":"e_1_3_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450569.3463560"},
{"key":"e_1_3_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.01445"},
{"key":"e_1_3_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.aiopen.2021.01.001"},
{"key":"e_1_3_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3220078"},
{"key":"e_1_3_2_1_54_1","doi-asserted-by":"crossref","unstructured":"Daniel Z\u00fcgner and Stephan G\u00fcnnemann. 2019. Adversarial attacks on graph neural networks via meta learning. arXiv preprint arXiv:1902.08412 (2019).","DOI":"10.24963\/ijcai.2019\/872"}],
"event":{"name":"RAID 2022: 25th International Symposium on Research in Attacks, Intrusions and Defenses","location":"Limassol Cyprus","acronym":"RAID 2022"},"container-title":["Proceedings of the 25th International Symposium on Research in Attacks, Intrusions and Defenses"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3545948.3545976","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3545948.3545976","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:30:17Z","timestamp":1750188617000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3545948.3545976"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,10,26]]},"references-count":54,"alternative-id":["10.1145\/3545948.3545976","10.1145\/3545948"],"URL":"https:\/\/doi.org\/10.1145\/3545948.3545976","relation":{},"subject":[],"published":{"date-parts":[[2022,10,26]]},"assertion":[{"value":"2022-10-26","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}