
Showing 1–50 of 74 results for author: McDaniel, P

  1. arXiv:2602.10299  [pdf, ps, other]

    cs.CR

    The Role of Learning in Attacking Intrusion Detection Systems

    Authors: Kyle Domico, Jean-Charles Noirot Ferrand, Patrick McDaniel

    Abstract: Recent work on network attacks has demonstrated that ML-based network intrusion detection systems (NIDS) can be evaded with adversarial perturbations. However, these attacks rely on complex optimizations that have large computational overheads, making them impractical in many real-world settings. In this paper, we introduce a lightweight adversarial agent that implements strategies (policies) tra…

    Submitted 10 February, 2026; originally announced February 2026.

  2. arXiv:2512.03207  [pdf]

    cs.CR

    Technical Report: The Need for a (Research) Sandstorm through the Privacy Sandbox

    Authors: Yohan Beugin, Patrick McDaniel

    Abstract: The Privacy Sandbox, launched in 2019, is a series of proposals from Google to reduce "cross-site and cross-app tracking while helping to keep online content and services free for all". Over the years, Google implemented, experimented with, and deprecated some of these APIs in their own products (Chrome, Android, etc.), which raised concerns about the potential of these mechanisms to fundamentally d…

    Submitted 2 December, 2025; originally announced December 2025.

    Comments: Technical report accompanying the research portal Privacy Sandstorm (https://privacysandstorm.github.io) launched after our HotPETs 2024 talk "The Need for a (Research) Sandstorm through the Privacy Sandbox"

  3. arXiv:2511.13641  [pdf, ps, other]

    cs.CR

    It's a Feature, Not a Bug: Secure and Auditable State Rollback for Confidential Cloud Applications

    Authors: Quinn Burke, Anjo Vahldiek-Oberwagner, Michael Swift, Patrick McDaniel

    Abstract: Replay and rollback attacks threaten cloud application integrity by reintroducing authentic yet stale data through an untrusted storage interface to compromise application decision-making. Prior security frameworks mitigate these attacks by enforcing forward-only state transitions (state continuity) with hardware-backed mechanisms, but they categorically treat all rollback as malicious and thus pr…

    Submitted 17 November, 2025; originally announced November 2025.

  4. LibIHT: A Hardware-Based Approach to Efficient and Evasion-Resistant Dynamic Binary Analysis

    Authors: Changyu Zhao, Yohan Beugin, Jean-Charles Noirot Ferrand, Quinn Burke, Guancheng Li, Patrick McDaniel

    Abstract: Dynamic program analysis is invaluable for malware detection, debugging, and performance profiling. However, software-based instrumentation incurs high overhead and can be evaded by anti-analysis techniques. In this paper, we propose LibIHT, a hardware-assisted tracing framework that leverages on-CPU branch tracing features (Intel Last Branch Record and Branch Trace Store) to efficiently capture p…

    Submitted 17 October, 2025; originally announced October 2025.

    Comments: Accepted in Proceedings of the 2025 Workshop on Software Understanding and Reverse Engineering (SURE'25), October 13-17, 2025, Taipei, Taiwan

  5. arXiv:2508.15386  [pdf, ps, other]

    cs.CR cs.SE

    A Practical Guideline and Taxonomy to LLVM's Control Flow Integrity

    Authors: Sabine Houy, Bruno Kreyssig, Timothee Riom, Alexandre Bartel, Patrick McDaniel

    Abstract: Memory corruption vulnerabilities remain one of the most severe threats to software security. They often allow attackers to achieve arbitrary code execution by redirecting a vulnerable program's control flow. While Control Flow Integrity (CFI) has gained traction to mitigate this exploitation path, developers are not provided with any direction on how to apply CFI to real-world software. In this w…

    Submitted 21 August, 2025; originally announced August 2025.

  6. arXiv:2507.11500  [pdf, ps, other]

    cs.CR

    ARMOR: Aligning Secure and Safe Large Language Models via Meticulous Reasoning

    Authors: Zhengyue Zhao, Yingzi Ma, Somesh Jha, Marco Pavone, Patrick McDaniel, Chaowei Xiao

    Abstract: Large Language Models have shown impressive generative capabilities across diverse tasks, but their safety remains a critical concern. Existing post-training alignment methods, such as SFT and RLHF, reduce harmful outputs yet leave LLMs vulnerable to jailbreak attacks, especially advanced optimization-based ones. Recent system-2 approaches enhance safety by adding inference-time reasoning, where m…

    Submitted 19 October, 2025; v1 submitted 14 July, 2025; originally announced July 2025.

  7. arXiv:2504.19373  [pdf, ps, other]

    cs.CR cs.AI

    Doxing via the Lens: Revealing Location-related Privacy Leakage on Multi-modal Large Reasoning Models

    Authors: Weidi Luo, Tianyu Lu, Qiming Zhang, Xiaogeng Liu, Bin Hu, Yue Zhao, Jieyu Zhao, Song Gao, Patrick McDaniel, Zhen Xiang, Chaowei Xiao

    Abstract: Recent advances in multi-modal large reasoning models (MLRMs) have shown significant ability to interpret complex visual content. While these models enable impressive reasoning capabilities, they also introduce novel and underexplored privacy risks. In this paper, we identify a novel category of privacy leakage in MLRMs: Adversaries can infer sensitive geolocation information, such as a user's hom…

    Submitted 3 March, 2026; v1 submitted 27 April, 2025; originally announced April 2025.

    Comments: Camera-ready version. Accepted as a poster at the 14th International Conference on Learning Representations (ICLR 2026). For official ICLR page, see https://iclr.cc/virtual/2026/poster/10006914

  8. arXiv:2504.07041  [pdf, other]

    cs.CR

    Efficient Storage Integrity in Adversarial Settings

    Authors: Quinn Burke, Ryan Sheatsley, Yohan Beugin, Eric Pauley, Owen Hines, Michael Swift, Patrick McDaniel

    Abstract: Storage integrity is essential to systems and applications that use untrusted storage (e.g., public clouds, end-user devices). However, known methods for achieving storage integrity either suffer from high (and often prohibitive) overheads or provide weak integrity guarantees. In this work, we demonstrate a hybrid approach to storage integrity that simultaneously reduces overhead while providing s…

    Submitted 9 April, 2025; originally announced April 2025.

    Comments: Published in the 2025 IEEE Symposium on Security and Privacy (S&P)

  9. arXiv:2503.14836  [pdf, ps, other]

    cs.LG cs.CV

    On the Robustness Tradeoff in Fine-Tuning

    Authors: Kunyang Li, Jean-Charles Noirot Ferrand, Ryan Sheatsley, Blaine Hoak, Yohan Beugin, Eric Pauley, Patrick McDaniel

    Abstract: Fine-tuning has become the standard practice for adapting pre-trained models to downstream tasks. However, its impact on model robustness is not well understood. In this work, we characterize the robustness-accuracy trade-off in fine-tuning. We evaluate the robustness and accuracy of fine-tuned models over 6 benchmark datasets and 7 different fine-tuning strategies. We observe a consistent trade-o…

    Submitted 14 July, 2025; v1 submitted 18 March, 2025; originally announced March 2025.

    Comments: Accepted to International Conference on Computer Vision, ICCV 2025

  10. arXiv:2503.01734  [pdf, ps, other]

    cs.CR cs.AI

    Adversarial Agents: Black-Box Evasion Attacks with Reinforcement Learning

    Authors: Kyle Domico, Jean-Charles Noirot Ferrand, Ryan Sheatsley, Eric Pauley, Josiah Hanna, Patrick McDaniel

    Abstract: Attacks on machine learning models have been extensively studied through stateless optimization. In this paper, we demonstrate how a reinforcement learning (RL) agent can learn a new class of attack algorithms that generate adversarial samples. Unlike traditional adversarial machine learning (AML) methods that craft adversarial samples independently, our RL-based approach retains and exploits past…

    Submitted 19 November, 2025; v1 submitted 3 March, 2025; originally announced March 2025.

  11. arXiv:2502.12377  [pdf, ps, other]

    cs.CV

    Alignment and Adversarial Robustness: Are More Human-Like Models More Secure?

    Authors: Blaine Hoak, Kunyang Li, Patrick McDaniel

    Abstract: A small but growing body of work has shown that machine learning models which better align with human vision have also exhibited higher robustness to adversarial examples, raising the question: can human-like perception make models more secure? If true generally, such mechanisms would offer new avenues toward robustness. In this work, we conduct a large-scale empirical analysis to systematically i…

    Submitted 14 July, 2025; v1 submitted 17 February, 2025; originally announced February 2025.

    Comments: Accepted to International Workshop on Security and Privacy-Preserving AI/ML (SPAIML) 2025

  12. arXiv:2502.08447  [pdf]

    cs.CR

    Deserialization Gadget Chains are not a Pathological Problem in Android: an In-Depth Study of Java Gadget Chains in AOSP

    Authors: Bruno Kreyssig, Timothée Riom, Sabine Houy, Alexandre Bartel, Patrick McDaniel

    Abstract: Inter-app communication is a mandatory and security-critical functionality of operating systems, such as Android. On the application level, Android implements this facility through Intents, which can also transfer non-primitive objects using Java's Serializable API. However, the Serializable API has a long history of deserialization vulnerabilities, specifically deserialization gadget chains. Rese…

    Submitted 12 February, 2025; originally announced February 2025.

  13. arXiv:2501.16534  [pdf, ps, other]

    cs.CR cs.AI

    Targeting Alignment: Extracting Safety Classifiers of Aligned LLMs

    Authors: Jean-Charles Noirot Ferrand, Yohan Beugin, Eric Pauley, Ryan Sheatsley, Patrick McDaniel

    Abstract: Alignment in large language models (LLMs) is used to enforce guidelines such as safety. Yet, alignment fails in the face of jailbreak attacks that modify inputs to induce unsafe outputs. In this paper, we introduce and evaluate a new technique for jailbreak attacks. We observe that alignment embeds a safety classifier in the LLM responsible for deciding between refusal and compliance, and seek to…

    Submitted 17 February, 2026; v1 submitted 27 January, 2025; originally announced January 2025.

    Comments: This work has been accepted for publication at the IEEE Conference on Secure and Trustworthy Machine Learning (SaTML). The final version will be available on IEEE Xplore

  14. arXiv:2412.10597  [pdf, other]

    cs.CV cs.CR

    Err on the Side of Texture: Texture Bias on Real Data

    Authors: Blaine Hoak, Ryan Sheatsley, Patrick McDaniel

    Abstract: Bias significantly undermines both the accuracy and trustworthiness of machine learning models. To date, one of the strongest biases observed in image classification models is texture bias, where models overly rely on texture information rather than shape information. Yet, existing approaches for measuring and mitigating texture bias have not been able to capture how textures impact model robustnes…

    Submitted 10 February, 2025; v1 submitted 13 December, 2024; originally announced December 2024.

    Comments: Accepted to IEEE Secure and Trustworthy Machine Learning (SaTML)

  15. arXiv:2410.05295  [pdf, other]

    cs.CR cs.AI cs.LG

    AutoDAN-Turbo: A Lifelong Agent for Strategy Self-Exploration to Jailbreak LLMs

    Authors: Xiaogeng Liu, Peiran Li, Edward Suh, Yevgeniy Vorobeychik, Zhuoqing Mao, Somesh Jha, Patrick McDaniel, Huan Sun, Bo Li, Chaowei Xiao

    Abstract: In this paper, we propose AutoDAN-Turbo, a black-box jailbreak method that can automatically discover as many jailbreak strategies as possible from scratch, without any human intervention or predefined scopes (e.g., specified candidate strategies), and use them for red-teaming. As a result, AutoDAN-Turbo can significantly outperform baseline methods, achieving a 74.3% higher average attack success…

    Submitted 22 April, 2025; v1 submitted 3 October, 2024; originally announced October 2024.

    Comments: ICLR 2025 Spotlight. Project Page: https://autodans.github.io/AutoDAN-Turbo Code: https://github.com/SaFoLab-WISC/AutoDAN-Turbo

  16. arXiv:2409.10297  [pdf, other]

    cs.CV cs.AI

    On Synthetic Texture Datasets: Challenges, Creation, and Curation

    Authors: Blaine Hoak, Patrick McDaniel

    Abstract: The influence of textures on machine learning models has been an ongoing investigation, specifically in texture bias/learning, interpretability, and robustness. However, due to the lack of large and diverse texture data available, the findings in these works have been limited, as more comprehensive evaluations have not been feasible. Image generative models are able to provide data creation at sca…

    Submitted 9 May, 2025; v1 submitted 16 September, 2024; originally announced September 2024.

  17. arXiv:2408.14646  [pdf, other]

    cs.CR

    ParTEETor: A System for Partial Deployments of TEEs within Tor

    Authors: Rachel King, Quinn Burke, Yohan Beugin, Blaine Hoak, Kunyang Li, Eric Pauley, Ryan Sheatsley, Patrick McDaniel

    Abstract: The Tor anonymity network allows users such as political activists and those under repressive governments to protect their privacy when communicating over the internet. At the same time, Tor has been demonstrated to be vulnerable to several classes of deanonymizing attacks that expose user behavior and identities. Prior work has shown that these threats can be mitigated by leveraging trusted execu…

    Submitted 26 August, 2024; originally announced August 2024.

  18. arXiv:2405.03830  [pdf, other]

    cs.CR

    On Scalable Integrity Checking for Secure Cloud Disks

    Authors: Quinn Burke, Ryan Sheatsley, Rachel King, Owen Hines, Michael Swift, Patrick McDaniel

    Abstract: Merkle hash trees are the standard method to protect the integrity and freshness of stored data. However, hash trees introduce additional compute and I/O costs on the I/O critical path, and prior efforts have not fully characterized these costs. In this paper, we quantify performance overheads of storage-level hash trees in realistic settings. We then design an optimized tree structure called Dyna…

    Submitted 29 January, 2025; v1 submitted 6 May, 2024; originally announced May 2024.

    Comments: Published in the 23rd USENIX Conference on File and Storage Technologies (FAST '25)

  19. arXiv:2403.19577  [pdf, other]

    cs.CR

    A Public and Reproducible Assessment of the Topics API on Real Data

    Authors: Yohan Beugin, Patrick McDaniel

    Abstract: The Topics API for the web is Google's privacy-enhancing alternative to replace third-party cookies. Results of prior work have led to an ongoing discussion between Google and research communities about the capability of Topics to trade off both utility and privacy. The central point of contention is largely around the realism of the datasets used in these analyses and their reproducibility; resea…

    Submitted 15 August, 2024; v1 submitted 28 March, 2024; originally announced March 2024.

    Comments: Accepted at SecWeb 2024: Workshop on Designing Security for the Web. Revisions: simulation bug fixed and new Topics classifier (v5); refer to the updated quantitative results in the latest version

  20. arXiv:2403.09543  [pdf, other]

    cs.CV cs.LG

    Explorations in Texture Learning

    Authors: Blaine Hoak, Patrick McDaniel

    Abstract: In this work, we investigate "texture learning": the identification of textures learned by object classification models, and the extent to which they rely on these textures. We build texture-object associations that uncover new insights about the relationships between texture and object classes in CNNs and find three classes of results: associations that are strong and expected, strong and…

    Submitted 14 March, 2024; originally announced March 2024.

    Comments: Accepted to ICLR 2024, Tiny Papers Track

  21. arXiv:2402.18649  [pdf, other]

    cs.CR cs.AI

    A New Era in LLM Security: Exploring Security Concerns in Real-World LLM-based Systems

    Authors: Fangzhou Wu, Ning Zhang, Somesh Jha, Patrick McDaniel, Chaowei Xiao

    Abstract: Large Language Model (LLM) systems are inherently compositional, with an individual LLM serving as the core foundation and additional layers of objects such as plugins, sandboxes, and so on. Along with the great potential, there are also increasing concerns over the security of such probabilistic intelligent systems. However, existing studies on LLM security often focus on individual LLMs, but without…

    Submitted 28 February, 2024; originally announced February 2024.

  22. arXiv:2402.14968  [pdf, other]

    cs.CR cs.CL

    Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment

    Authors: Jiongxiao Wang, Jiazhao Li, Yiquan Li, Xiangyu Qi, Junjie Hu, Yixuan Li, Patrick McDaniel, Muhao Chen, Bo Li, Chaowei Xiao

    Abstract: Despite the general capabilities of Large Language Models (LLMs), these models still require fine-tuning or adaptation with customized data when meeting specific business demands. However, this process inevitably introduces new threats, particularly against the Fine-tuning based Jailbreak Attack (FJAttack) under the setting of Language-Model-as-a-Service (LMaaS), where the model's safety has been s…

    Submitted 20 June, 2024; v1 submitted 22 February, 2024; originally announced February 2024.

  23. Characterizing the Modification Space of Signature IDS Rules

    Authors: Ryan Guide, Eric Pauley, Yohan Beugin, Ryan Sheatsley, Patrick McDaniel

    Abstract: Signature-based Intrusion Detection Systems (SIDSs) are traditionally used to detect malicious activity in networks. A notable example of such a system is Snort, which compares network traffic against a series of rules that match known exploits. Current SIDS rules are designed to minimize the amount of legitimate traffic flagged incorrectly, reducing the burden on network administrators. However,…

    Submitted 14 February, 2024; originally announced February 2024.

    Comments: Published in: MILCOM 2023 - 2023 IEEE Military Communications Conference (MILCOM)

  24. arXiv:2310.11597  [pdf, other]

    cs.CR cs.AI

    The Efficacy of Transformer-based Adversarial Attacks in Security Domains

    Authors: Kunyang Li, Kyle Domico, Jean-Charles Noirot Ferrand, Patrick McDaniel

    Abstract: Today, the security of many domains relies on the use of Machine Learning to detect threats, identify vulnerabilities, and safeguard systems from attacks. Recently, transformer architectures have improved the state-of-the-art performance on a wide range of tasks such as malware detection and network intrusion detection. But, before abandoning current approaches in favor of transformers, it is crucial to unde…

    Submitted 17 October, 2023; originally announced October 2023.

    Comments: Accepted to IEEE Military Communications Conference (MILCOM), AI for Cyber Workshop, 2023

  25. arXiv:2309.06263  [pdf, other]

    cs.CR

    Systematic Evaluation of Geolocation Privacy Mechanisms

    Authors: Alban Héon, Ryan Sheatsley, Quinn Burke, Blaine Hoak, Eric Pauley, Yohan Beugin, Patrick McDaniel

    Abstract: Location data privacy has become a serious concern for users as Location Based Services (LBSs) have become an important part of their life. It is possible for malicious parties having access to geolocation data to learn sensitive information about the user such as religion or political views. Location Privacy Preserving Mechanisms (LPPMs) have been proposed by previous works to ensure the privacy…

    Submitted 12 September, 2023; originally announced September 2023.

    Comments: M.S. Thesis (https://etda.libraries.psu.edu/catalog/25677abh5960)

  26. arXiv:2308.00623  [pdf]

    cs.CR

    Secure and Trustworthy Computing 2.0 Vision Statement

    Authors: Patrick McDaniel, Farinaz Koushanfar

    Abstract: The Secure and Trustworthy Computing (SaTC) program within the National Science Foundation (NSF) serves as the primary instrument for creating novel fundamental science in security and privacy in the United States with broad impacts that influence the world. The program funds research in a vast array of research topics that span technology, theory, policy, law, and society. In the Spring o…

    Submitted 1 August, 2023; originally announced August 2023.

  27. arXiv:2307.11993  [pdf, other]

    cs.CR cs.CY cs.DC cs.OS eess.SY

    Verifiable Sustainability in Data Centers

    Authors: Syed Rafiul Hussain, Patrick McDaniel, Anshul Gandhi, Kanad Ghose, Kartik Gopalan, Dongyoon Lee, Yu David Liu, Zhenhua Liu, Shuai Mu, Erez Zadok

    Abstract: Data centers have significant energy needs, both embodied and operational, affecting sustainability adversely. The current techniques and tools for collecting, aggregating, and reporting verifiable sustainability data are vulnerable to cyberattacks and misuse, requiring new security and privacy-preserving solutions. This paper outlines security challenges and research directions for addressing the…

    Submitted 12 January, 2024; v1 submitted 22 July, 2023; originally announced July 2023.

  28. arXiv:2306.03825  [pdf, other]

    cs.CR

    Interest-disclosing Mechanisms for Advertising are Privacy-Exposing (not Preserving)

    Authors: Yohan Beugin, Patrick McDaniel

    Abstract: Today, targeted online advertising relies on unique identifiers assigned to users through third-party cookies--a practice at odds with user privacy. While the web and advertising communities have proposed solutions that we refer to as interest-disclosing mechanisms, including Google's Topics API, an independent analysis of these proposals in realistic scenarios has yet to be performed. In this pap…

    Submitted 8 September, 2023; v1 submitted 6 June, 2023; originally announced June 2023.

    Comments: PoPETS (Proceedings on Privacy Enhancing Technologies Symposium) 2024

  29. arXiv:2305.18639  [pdf, other]

    cs.CR cs.OS

    Securing Cloud File Systems with Trusted Execution

    Authors: Quinn Burke, Yohan Beugin, Blaine Hoak, Rachel King, Eric Pauley, Ryan Sheatsley, Mingli Yu, Ting He, Thomas La Porta, Patrick McDaniel

    Abstract: Cloud file systems offer organizations a scalable and reliable file storage solution. However, cloud file systems have become prime targets for adversaries, and traditional designs are not equipped to protect organizations against the myriad of attacks that may be initiated by a malicious cloud provider, co-tenant, or end-client. Recently proposed designs leveraging cryptographic techniques and tr…

    Submitted 2 October, 2024; v1 submitted 29 May, 2023; originally announced May 2023.

  30. arXiv:2210.14999  [pdf, other]

    cs.CR

    Secure IP Address Allocation at Cloud Scale

    Authors: Eric Pauley, Kyle Domico, Blaine Hoak, Ryan Sheatsley, Quinn Burke, Yohan Beugin, Engin Kirda, Patrick McDaniel

    Abstract: Public clouds necessitate dynamic resource allocation and sharing. However, the dynamic allocation of IP addresses can be abused by adversaries to source malicious traffic, bypass rate limiting systems, and even capture traffic intended for other cloud tenants. As a result, both the cloud provider and their customers are put at risk, and defending against these threats requires a rigorous analysis…

    Submitted 10 September, 2024; v1 submitted 26 October, 2022; originally announced October 2022.

    Comments: Replaced with version to appear in 2025 Network and Distributed Systems Security (NDSS) Symposium

  31. arXiv:2209.04521  [pdf, other]

    cs.CR cs.LG

    The Space of Adversarial Strategies

    Authors: Ryan Sheatsley, Blaine Hoak, Eric Pauley, Patrick McDaniel

    Abstract: Adversarial examples, inputs designed to induce worst-case behavior in machine learning models, have been extensively studied over the past decade. Yet, our understanding of this phenomenon stems from a rather fragmented pool of knowledge; at present, there are a handful of attacks, each with disparate assumptions in threat models and incomparable definitions of optimality. In this paper, we propo…

    Submitted 6 September, 2023; v1 submitted 9 September, 2022; originally announced September 2022.

    Comments: Accepted to the 32nd USENIX Security Symposium

  32. arXiv:2208.09776  [pdf, other]

    cs.CR

    Privacy-Preserving Protocols for Smart Cameras and Other IoT Devices

    Authors: Yohan Beugin, Quinn Burke, Blaine Hoak, Ryan Sheatsley, Eric Pauley, Gang Tan, Syed Rafiul Hussain, Patrick McDaniel

    Abstract: Millions of consumers depend on smart camera systems to remotely monitor their homes and businesses. However, the architecture and design of popular commercial systems require users to relinquish control of their data to untrusted third parties, such as service providers (e.g., the cloud). Third parties therefore can access (and in some instances have accessed) the video footage without the users' knowledg…

    Submitted 20 August, 2022; originally announced August 2022.

    Comments: Extension of arXiv:2201.09338

  33. arXiv:2205.00566  [pdf, other]

    cs.CR cs.AI

    Adversarial Planning

    Authors: Valentin Vie, Ryan Sheatsley, Sophia Beyda, Sushrut Shringarputale, Kevin Chan, Trent Jaeger, Patrick McDaniel

    Abstract: Planning algorithms are used in computational systems to direct autonomous behavior. In a canonical application, for example, planning for autonomous vehicles is used to automate the static or continuous planning towards performance, resource management, or functional goals (e.g., arriving at the destination, managing fuel consumption). Existing planning algorithms assume non-adversarial sett…

    Submitted 1 May, 2022; originally announced May 2022.

  34. arXiv:2204.05780  [pdf, other]

    cs.LG astro-ph.EP astro-ph.SR

    A Machine Learning and Computer Vision Approach to Geomagnetic Storm Forecasting

    Authors: Kyle Domico, Ryan Sheatsley, Yohan Beugin, Quinn Burke, Patrick McDaniel

    Abstract: Geomagnetic storms, disturbances of Earth's magnetosphere caused by masses of charged particles being emitted from the Sun, are an uncontrollable threat to modern technology. Notably, they have the potential to damage satellites and cause instability in power grids on Earth, among other disasters. They result from high solar activity, which is induced by cool areas on the Sun known as sunspots. F…

    Submitted 4 April, 2022; originally announced April 2022.

    Comments: Presented at ML-Helio 2022

  35. Measuring and Mitigating the Risk of IP Reuse on Public Clouds

    Authors: Eric Pauley, Ryan Sheatsley, Blaine Hoak, Quinn Burke, Yohan Beugin, Patrick McDaniel

    Abstract: Public clouds provide scalable and cost-efficient computing through resource sharing. However, moving from traditional on-premises service management to clouds introduces new challenges; failure to correctly provision, maintain, or decommission elastic services can lead to functional failure and vulnerability to attack. In this paper, we explore a broad class of attacks on clouds which we refer to…

    Submitted 11 April, 2022; originally announced April 2022.

  36. arXiv:2203.06694  [pdf, other]

    cs.CR

    Generating Practical Adversarial Network Traffic Flows Using NIDSGAN

    Authors: Bolor-Erdene Zolbayar, Ryan Sheatsley, Patrick McDaniel, Michael J. Weisman, Sencun Zhu, Shitong Zhu, Srikanth Krishnamurthy

    Abstract: Network intrusion detection systems (NIDS) are an essential defense for computer networks and the hosts within them. Machine learning (ML) nowadays predominantly serves as the basis for NIDS decision making, where models are tuned to reduce false alarms, increase detection rates, and detect known and unknown attacks. At the same time, ML models have been found to be vulnerable to adversarial examp…

    Submitted 13 March, 2022; originally announced March 2022.

  37. arXiv:2202.10387  [pdf, other]

    cs.LG cs.CR

    Improving Radioactive Material Localization by Leveraging Cyber-Security Model Optimizations

    Authors: Ryan Sheatsley, Matthew Durbin, Azaree Lintereur, Patrick McDaniel

    Abstract: One of the principal uses of physical-space sensors in public safety applications is the detection of unsafe conditions (e.g., release of poisonous gases, weapons in airports, tainted food). However, current detection methods in these applications are often costly, slow to use, and can be inaccurate in complex, changing, or new environments. In this paper, we explore how machine learning methods u…

    Submitted 21 February, 2022; originally announced February 2022.

    Comments: Accepted to IEEE Sensors Journal

  38. HoneyModels: Machine Learning Honeypots

    Authors: Ahmed Abdou, Ryan Sheatsley, Yohan Beugin, Tyler Shipp, Patrick McDaniel

    Abstract: Machine Learning is becoming a pivotal aspect of many systems today, offering newfound performance on classification and prediction tasks, but this rapid integration also comes with new unforeseen vulnerabilities. To harden these systems, the ever-growing field of Adversarial Machine Learning has proposed new attack and defense mechanisms. However, a great asymmetry exists as these defensive method…

    Submitted 21 February, 2022; originally announced February 2022.

    Comments: Published in: MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM)

  39. arXiv:2201.09338  [pdf, other]

    cs.CR

    Building a Privacy-Preserving Smart Camera System

    Authors: Yohan Beugin, Quinn Burke, Blaine Hoak, Ryan Sheatsley, Eric Pauley, Gang Tan, Syed Rafiul Hussain, Patrick McDaniel

    Abstract: Millions of consumers depend on smart camera systems to remotely monitor their homes and businesses. However, the architecture and design of popular commercial systems require users to relinquish control of their data to untrusted third parties, such as service providers (e.g., the cloud). Third parties therefore can access (and in some instances have accessed) the video footage without the users' knowledg…

    Submitted 23 January, 2022; originally announced January 2022.

    Comments: Accepted to PETS (Privacy Enhancing Technologies Symposium) 2022

    Journal ref: PoPETS (Proceedings on Privacy Enhancing Technologies Symposium) 2022

  40. arXiv:2105.08619  [pdf, other]

    cs.CR cs.LG cs.LO

    On the Robustness of Domain Constraints

    Authors: Ryan Sheatsley, Blaine Hoak, Eric Pauley, Yohan Beugin, Michael J. Weisman, Patrick McDaniel

    Abstract: Machine learning is vulnerable to adversarial examples, inputs designed to cause models to perform poorly. However, it is unclear if adversarial examples represent realistic inputs in the modeled domains. Diverse domains such as networks and phishing have domain constraints, complex relationships between features that an adversary must satisfy for an attack to be realized (in addition to any adversa…

    Submitted 7 November, 2021; v1 submitted 18 May, 2021; originally announced May 2021.

    Comments: Accepted to the 28th ACM Conference on Computer and Communications Security. Seoul, South Korea

  41. arXiv:2011.01183  [pdf, other]

    cs.CR cs.LG

    Adversarial Examples in Constrained Domains

    Authors: Ryan Sheatsley, Nicolas Papernot, Michael Weisman, Gunjan Verma, Patrick McDaniel

    Abstract: Machine learning algorithms have been shown to be vulnerable to adversarial manipulation through systematic modification of inputs (e.g., adversarial examples) in domains such as image recognition. Under the default threat model, the adversary exploits the unconstrained nature of images; each feature (pixel) is fully under control of the adversary. However, it is not clear how these attacks transl…

    Submitted 9 September, 2022; v1 submitted 2 November, 2020; originally announced November 2020.

    Comments: Accepted to IOS Press Journal of Computer Security

  42. arXiv:2009.10021  [pdf, other]

    cs.NI

    MLSNet: A Policy Complying Multilevel Security Framework for Software Defined Networking

    Authors: Stefan Achleitner, Quinn Burke, Patrick McDaniel, Trent Jaeger, Thomas La Porta, Srikanth Krishnamurthy

    Abstract: Ensuring that information flowing through a network is secure from manipulation and eavesdropping by unauthorized parties is an important task for network administrators. Many cyber attacks rely on a lack of network-level information flow controls to successfully compromise a victim network. Once an adversary exploits an initial entry point, they can eavesdrop and move laterally within the network…

    Submitted 21 September, 2020; originally announced September 2020.

    Report number: INSR-500-TR-0500-2019

  43. arXiv:2002.07641  [pdf, other]

    cs.SE cs.PF

    IoTRepair: Systematically Addressing Device Faults in Commodity IoT (Extended Paper)

    Authors: Michael Norris, Berkay Celik, Patrick McDaniel, Gang Tan, Prasanna Venkatesh, Shulin Zhao, Anand Sivasubramaniam

    Abstract: IoT devices are decentralized and deployed in unstable environments, which makes them prone to various kinds of faults, such as device failure and network disruption. Yet, current IoT platforms require programmers to handle faults manually, a complex and error-prone task. In this paper, we present IoTRepair, a fault-handling system for IoT that (1) integrates a fault identification module t…

    Submitted 17 February, 2020; originally announced February 2020.

  44. arXiv:1911.10461  [pdf, other]

    cs.CR cs.LG

    Real-time Analysis of Privacy-(un)aware IoT Applications

    Authors: Leonardo Babun, Z. Berkay Celik, Patrick McDaniel, A. Selcuk Uluagac

    Abstract: Users trust IoT apps to control and automate their smart devices. These apps necessarily have access to sensitive data to implement their functionality. However, users lack visibility into how their sensitive data is used (or leaked), and they often blindly trust the app developers. In this paper, we present IoTWatcH, a novel dynamic analysis tool that uncovers the privacy risks of IoT apps in rea…

    Submitted 24 November, 2019; originally announced November 2019.

  45. arXiv:1911.10186  [pdf, other]

    cs.CR

    KRATOS: Multi-User Multi-Device-Aware Access Control System for the Smart Home

    Authors: Amit Kumar Sikder, Leonardo Babun, Z. Berkay Celik, Abbas Acar, Hidayet Aksu, Patrick McDaniel, Engin Kirda, A. Selcuk Uluagac

    Abstract: In a smart home system, multiple users have access to multiple devices, typically through a dedicated app installed on a mobile device. Traditional access control mechanisms consider one unique trusted user that controls the access to the devices. However, multi-user multi-device smart home settings pose fundamentally different challenges to traditional single-user systems. For instance, in a mult…

    Submitted 2 June, 2020; v1 submitted 22 November, 2019; originally announced November 2019.

    Comments: Accepted in the 13th ACM Conference on Security and Privacy in Wireless and Mobile Networks (ACM WiSec 2020)

  46. arXiv:1909.00056  [pdf, ps, other]

    cs.CY cs.CR stat.ML

    How Relevant is the Turing Test in the Age of Sophisbots?

    Authors: Dan Boneh, Andrew J. Grotto, Patrick McDaniel, Nicolas Papernot

    Abstract: Popular culture has contemplated societies of thinking machines for generations, envisioning futures from utopian to dystopian. These futures are, arguably, here now: we find ourselves at the doorstep of technology that can at least simulate the appearance of thinking, acting, and feeling. The real question is: now what?

    Submitted 30 August, 2019; originally announced September 2019.

  47. arXiv:1812.02978  [pdf, other]

    cs.SI

    More or Less? Predict the Social Influence of Malicious URLs on Social Media

    Authors: Chun-Ming Lai, Xiaoyun Wang, Jon W. Chapman, Yu-Cheng Lin, Yu-Chang Ho, S. Felix Wu, Patrick McDaniel, Hasan Cam

    Abstract: Users of Online Social Networks (OSNs) interact with each other more than ever. In the context of a public discussion group, people receive, read, and write comments in response to articles and postings. In the absence of access control mechanisms, OSNs are a great environment for attackers to influence others, from spreading phishing URLs, to posting fake news. Moreover, OSN user behavior can be…

    Submitted 7 December, 2018; originally announced December 2018.

    Comments: 10 pages, 6 figures

  48. IoTSan: Fortifying the Safety of IoT Systems

    Authors: Dang Tu Nguyen, Chengyu Song, Zhiyun Qian, Srikanth V. Krishnamurthy, Edward J. M. Colbert, Patrick McDaniel

    Abstract: Today's IoT systems include event-driven smart applications (apps) that interact with sensors and actuators. A problem specific to IoT systems is that buggy apps, unforeseen bad app interactions, or device/communication failures, can cause unsafe and dangerous physical states. Detecting flaws that lead to such states requires a holistic view of installed apps, component devices, their configurati…

    Submitted 27 October, 2018; v1 submitted 22 October, 2018; originally announced October 2018.

    Comments: Proc. of the 14th ACM CoNEXT, 2018

  49. arXiv:1809.06962  [pdf, other]

    cs.CR cs.PL

    Program Analysis of Commodity IoT Applications for Security and Privacy: Challenges and Opportunities

    Authors: Z. Berkay Celik, Earlence Fernandes, Eric Pauley, Gang Tan, Patrick McDaniel

    Abstract: Recent advances in Internet of Things (IoT) have enabled myriad domains such as smart homes, personal monitoring devices, and enhanced manufacturing. IoT is now pervasive: new applications are being used in nearly every conceivable environment, which leads to the adoption of device-based interaction and automation. However, IoT has also raised issues about the security and privacy of these digita…

    Submitted 24 December, 2018; v1 submitted 18 September, 2018; originally announced September 2018.

    Comments: syntax and grammar errors are fixed, and IoT platforms are updated to match the submission

  50. arXiv:1808.05579  [pdf, other]

    cs.CR cs.HC cs.OS

    Regulating Access to System Sensors in Cooperating Programs

    Authors: Giuseppe Petracca, Jens Grossklags, Patrick McDaniel, Trent Jaeger

    Abstract: Modern operating systems such as Android, iOS, Windows Phone, and Chrome OS support a cooperating program abstraction. Instead of placing all functionality into a single program, programs cooperate to complete tasks requested by users. However, untrusted programs may exploit interactions with other programs to obtain unauthorized access to system sensors either directly or through privileged servi…

    Submitted 2 August, 2018; originally announced August 2018.