Papers updated in last 31 days (Page 3 of 291 results)

Last updated:  2025-12-03
PQCUARK: A Scalar RISC-V ISA Extension for ML-KEM and ML-DSA
Xavier Carril, Alicia Manuel Pasoot, Emanuele Parisi, Carlos Andrés Lara-Niño, Oriol Farràs, and Miquel Moretó
Recent advances in quantum computing pose a threat to the security of digital communications, as large-scale quantum machines can break commonly used cryptographic algorithms, such as RSA and ECC. To mitigate this risk, post-quantum cryptography (PQC) schemes are being standardized, with recent NIST recommendations selecting two lattice-based algorithms: ML-KEM for key encapsulation and ML-DSA for digital signatures. Two computationally intensive kernels dominate the execution of these schemes: the Number-Theoretic Transform (NTT) for polynomial multiplication and the Keccak-f1600 permutation function for polynomial sampling and hashing. This paper presents PQCUARK, a scalar RISC-V ISA extension that accelerates these key operations. PQCUARK integrates two novel accelerators within the core pipeline: (i) a packed SIMD butterfly unit capable of performing NTT butterfly operations on 2×32-bit or 4×16-bit polynomial coefficients, and (ii) a permutation engine that delivers two Keccak rounds per cycle, hosting a private state and a direct interface to the core's Load-Store Unit, eliminating the need for a custom register file interface. We have integrated PQCUARK into an RV64 core and deployed it on an FPGA. Experimental results demonstrate that PQCUARK provides up to 10.1× speedup over the NIST baselines and 2.3× over optimized software, and it outperforms comparable state-of-the-art approaches by 1.4–12.3×. ASIC synthesis in GF22-FDSOI technology shows a moderate core area increase of 8% at 1.2 GHz, with the PQCUARK units outside the critical path.
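For orientation, the Cooley-Tukey butterfly that a unit like PQCUARK's SIMD butterfly performs in hardware can be sketched in software. This is a minimal illustration, not the paper's implementation: the modulus q = 3329 is ML-KEM's prime, while the layer loop is a simplification and the 2×32-bit/4×16-bit lane packing is a hardware detail not modeled here.

```python
# Minimal software model of the NTT butterfly that a unit like PQCUARK's
# accelerates. Q = 3329 is ML-KEM's prime modulus; chosen for concreteness.

Q = 3329  # ML-KEM (Kyber) prime modulus

def butterfly(a: int, b: int, omega: int) -> tuple:
    """One Cooley-Tukey butterfly: (a, b) -> (a + w*b, a - w*b) mod q."""
    t = (omega * b) % Q
    return ((a + t) % Q, (a - t) % Q)

def ntt_layer(coeffs, omega):
    """One layer of butterflies over a coefficient block (illustrative)."""
    half = len(coeffs) // 2
    out = list(coeffs)
    for i in range(half):
        w = pow(omega, i, Q)  # twiddle factor for lane i
        out[i], out[i + half] = butterfly(coeffs[i], coeffs[i + half], w)
    return out
```

A full NTT iterates such layers log2(n) times with precomputed twiddle factors; a packed SIMD unit executes several of these butterflies per instruction.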
Last updated:  2025-12-03
Mobius: Enabling Byzantine-Resilient Single Secret Leader Election with Uniquely Verifiable State
Hanyue Dou, Peifang Ni, Yingzi Gao, and Jing Xu
The Single Secret Leader Election (SSLE) protocol facilitates the election of a single leader per round among a group of registered nodes while ensuring unpredictability. Ethereum has identified SSLE as an essential component in its development roadmap and has adopted it as a potential solution to counteract potential attacks. However, we identify a new form of attack, termed the state uniqueness attack, caused by malicious leaders proposing multiple publicly verifiable states. This attack undermines the property of uniqueness in subsequent leader elections and, with high probability, leads to violations of fundamental security properties of the overlying protocol, such as liveness. The vulnerability stems from designs that reduce the uniqueness guarantee to a unique state per election, and it generalizes to existing SSLE constructions. We further quantify the severity of this attack through theoretical analysis and real-world executions on Ethereum, highlighting the critical challenges in designing provably secure SSLE protocols. To address the state uniqueness attack while ensuring both security and practical performance, we present a universal SSLE protocol called Mobius that does not rely on extra trust assumptions. Specifically, Mobius prevents the generation of multiple verifiable states for each election and achieves a unique state across consecutive executions through an innovative approximately-unique randomization mechanism. In addition to providing a comprehensive security analysis in the Universal Composability framework, we develop a proof-of-concept implementation of Mobius and conduct extensive experiments to evaluate its security and overhead. The experimental results show that Mobius exhibits enhanced security while significantly reducing communication complexity throughout the protocol execution, achieving over 80% reduction in the registration phase.
Last updated:  2025-12-03
On the (Un)biasability of Existing Verifiable Random Functions
Davide Carnemolla, Dario Catalano, Valentina Frasca, and Emanuele Giunta
Verifiable Random Functions (VRFs) play a fundamental role in modern blockchain designs because of their applications in leader election protocols. In such contexts, however, the original definition by Micali, Rabin and Vadhan (FOCS 99) falls short of guaranteeing fairness when keys are sampled maliciously. The elegant notion of unbiasable VRF, recently proposed by Giunta and Stewart (Eurocrypt 24), addresses these concerns while remaining simple to state and easy to realize, at least in the random oracle model. Achieving unbiasability in the standard model is a different story, though: all known constructions rely on compilers that invariably reduce the efficiency of the VRF from which one starts. In this paper, we look at the unbiasability of existing VRFs in the standard model. Our findings are mostly negative; we show that, essentially, all known constructions are not natively unbiasable. We do so by showing classes of attacks that (almost) completely cover the set of existing VRF constructions. On the positive side, we show that some concrete schemes (and notably the well-known Dodis-Yampolskiy VRF) can be modified to achieve meaningful notions of unbiasability, while retaining their original efficiency.
Last updated:  2025-12-03
Multi-Party Functional Encryption (MPFE): A tool in the distributed and decentralized world
Ruxandra F. Olimid
Functional Encryption (FE) is a concept that generalizes public-key encryption, allowing a party that owns a private key to compute a function of the plaintext (instead of the plaintext itself). Multi-Party Functional Encryption (MPFE) generalizes this concept and adapts it to multi-party settings, allowing for decentralization in both the ciphertexts—which might originate from multiple sources—and the keys—thereby eliminating the necessity of a central authority and avoiding the introduction of a single point of trust and failure. The current paper presents a substantial foundation of MPFE for the non-specialist reader. It provides definitions and classifications, and discusses properties of MPFE and its relation to other cryptographic concepts. The potential applicability of MPFE, which covers multiple domains and use cases, is discussed. The paper investigates the real-world adoption of MPFE, including its presence in technical specifications and its availability in open-source libraries. Finally, the current study discusses challenges, thereby opening up new research directions.
Last updated:  2025-12-02
Game-Theoretically Fair Distributed Coin Tossing With Private Preferences
Pedro Branco, Pratik Soni, Sri AravindaKrishnan Thyagarajan, and Ke Wu
Secure coin-tossing is typically modeled as an input-less functionality, where parties with no private inputs jointly generate a fair coin. In the dishonest majority setting, however, a strongly fair coin-tossing protocol is impossible. To circumvent this barrier, recent work has adopted the weaker notion of game-theoretic fairness, where adversaries are rational parties with preferences for specific outcomes, seeking to bias the coin in their favor. Yet these preferences may encode secret information, making prior protocols, which assume public preferences, fundamentally incompatible with privacy. We initiate a comprehensive study of privacy-preserving game-theoretically fair coin-tossing, where the preferences of honest parties remain private. We propose a simulation-based security framework and a new ideal functionality that reconciles both preference-privacy and game-theoretic fairness. A key ingredient is a certifying authority that authenticates each party's preference and publishes only aggregate statistics, preventing misreporting while hiding parties' preferences. The functionality guarantees that every honest party receives an output: either a uniform coin; or, if an adversary deviates, a coin that strictly decreases the adversarial coalition's expected utility. Within this framework, we construct a protocol realizing our ideal functionality under standard cryptographic assumptions that works for both binary and general $m$-sided coin-tossing. Our schemes tolerate the same optimal (or nearly optimal) corruption thresholds as the best known protocols with public preferences (Wu-Asharov-Shi, EUROCRYPT '22; Thyagarajan-Wu-Soni, CRYPTO '24). Technically, our protocols combine authenticated preferences with an anonymous communication layer that decouples identities from preference-dependent actions, together with a deviation-penalty mechanism that enforces game-theoretic fairness.
Our work is the first to reconcile game-theoretic fairness with preference privacy, offering new definitional tools and efficient protocols for rational multi-party computation in dishonest majority settings.
Last updated:  2025-12-02
On the Credibility of Deniable Communication in Court
Jacob Leiken and Sunoo Park
Over time, cryptographically deniable systems have come to be associated in computer-science literature with the idea of "denying" evidence in court — specifically, with the ability to convincingly forge evidence in courtroom scenarios and, relatedly, an inability to authenticate evidence in such contexts. Indeed, in some cryptographic models, the ability to falsify mathematically implies the inability to authenticate. Evidentiary processes in courts, however, have been developed over centuries to account for the reality that evidence has always been forgeable, and they rely on factors outside of cryptographic models to seek the truth "as well as possible" while acknowledging that all evidence is imperfect. We argue that deniability does not and need not change this paradigm. Our analysis highlights a gap between technical deniability notions and their application to the real world. In realistic situations, there will essentially always be factors outside a cryptographic model that influence perceptions of a message's authenticity. We propose the broader concept of credibility to capture these factors. The credibility of a system is determined by (1) a threshold of quality that a forgery must pass to be "believable" as an original communication, which varies based on sociotechnical context and threat model, (2) the ease of creating a forgery that passes this threshold, which is also context- and threat-model-dependent, and (3) default system retention policy and retention settings. All three aspects are important for designing secure communication systems for real-world threat models, and some aspects of (2) and (3) may be incorporated directly into technical system design.
We hope that our model of credibility will facilitate system design and deployment that addresses threats that are not and cannot be captured by purely technical definitions and existing cryptographic models, and support more nuanced discourse on the strengths and limitations of cryptographic guarantees within specific legal and sociotechnical contexts.
Last updated:  2025-12-02
Polynomial Secret Sharing Schemes and Algebraic Matroids
Amos Beimel, Oriol Farràs, and Adriana Moya
Polynomial secret sharing schemes generalize the linear ones by adding more expressivity and giving room for efficiency improvements. In these schemes, the secret is an element of a finite field, and the shares are obtained by evaluating polynomials on the secret and some random field elements, i.e., for every party there is a set of polynomials that computes the share of the party. Notably, for general access structures, the best known polynomial schemes have better share size than the best known linear ones. This work investigates the properties of the polynomials and the fields that allow these efficiency gains and aims to characterize the access structures that benefit from them. We focus first on ideal schemes, which are optimal in the sense that the size of each share is the size of the secret. We prove that, under some restrictions, if the degrees of the sharing polynomials are not too big compared to the size of the field, then its access structure is a port of an algebraic matroid. This result establishes a new connection between ideal schemes and matroids, extending the known connection between ideal linear schemes and linearly representable matroids. For general polynomial schemes, we extend these results and analyze their privacy and correctness. Additionally, we show that given a set of polynomials over a field of large characteristic, one can construct linear schemes that realize the access structure determined by these polynomials; as a consequence, polynomial secret sharing schemes over these fields are not stronger than linear schemes. While there exist ports of algebraic matroids that do not admit ideal schemes, we demonstrate that they always admit schemes with statistical security and information ratio tending to one. This is achieved by considering schemes where the sharings are points in algebraic varieties.
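As a concrete instance of the linear subclass discussed above, Shamir's scheme gives each party a share obtained by evaluating a polynomial on the secret and random field elements. The sketch below is standard background material for orientation, not the paper's construction; the prime and parameters are illustrative choices.

```python
import random

P = 2**61 - 1  # illustrative prime field, not a parameter from the paper

def share(secret, t, n, rng=random):
    """(t+1)-out-of-n Shamir sharing: party i's share is the evaluation at i
    of s + r_1*X + ... + r_t*X^t, i.e. a polynomial in the secret s and
    random field elements r_k -- a linear polynomial scheme."""
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t)]
    return {i: sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)}

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t+1 share points."""
    s = 0
    for i, yi in shares.items():
        num, den = 1, 1
        for j in shares:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P
    return s
```

This scheme is ideal in the sense used above (each share is a single field element, the same size as the secret), and its access structure is a matroid port, matching the linear-scheme/matroid connection that the paper extends to polynomial schemes.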
Last updated:  2025-12-02
An Improved Quantum Algorithm for 3-Tuple Lattice Sieving
Lynn Engelberts, Yanlin Chen, Amin Shiraz Gilani, Maya-Iggy van Hoof, Stacey Jeffery, and Ronald de Wolf
The assumed hardness of the Shortest Vector Problem in high-dimensional lattices is one of the cornerstones of post-quantum cryptography. The fastest known heuristic attacks on SVP are via so-called sieving methods. While these still take exponential time in the dimension $d$, they are significantly faster than non-heuristic approaches and their heuristic assumptions are verified by extensive experiments. $k$-Tuple sieving is an iterative method where each iteration takes as input a large number of lattice vectors of a certain norm, and produces an equal number of lattice vectors of slightly smaller norm, by taking sums and differences of $k$ of the input vectors. Iterating these ''sieving steps'' sufficiently many times produces a short lattice vector. The fastest attacks (both classical and quantum) are for $k=2$, but taking larger $k$ reduces the amount of memory required for the attack. In this paper we improve the quantum time complexity of 3-tuple sieving from $2^{0.3098 d}$ to $2^{0.2846 d}$, using a two-level amplitude amplification aided by a preprocessing step that associates the given lattice vectors with nearby ''center points'' to focus the search on the neighborhoods of these center points. Our algorithm uses $2^{0.1887d}$ classical bits and QCRAM bits, and $2^{o(d)}$ qubits. This is the fastest known quantum algorithm for SVP when total memory is limited to $2^{0.1887d}$.
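To convey the input/output contract of a sieving step, here is a toy classical 2-tuple sieve. This is purely illustrative: practical sieves use near-neighbor search techniques (and this paper, quantum amplitude amplification over 3-tuples) rather than the brute-force pairing shown here.

```python
import itertools

def norm_sq(v):
    """Squared Euclidean norm of an integer vector."""
    return sum(c * c for c in v)

def sieve_step(vectors):
    """Toy 2-tuple sieving step: return nonzero sums/differences of input
    pairs that are strictly shorter than the longest input vector.
    Real sieves keep the list size roughly constant across iterations;
    this sketch only shows how +/- combinations yield shorter vectors."""
    bound = max(norm_sq(v) for v in vectors)
    out = []
    for u, v in itertools.combinations(vectors, 2):
        for sign in (1, -1):
            w = tuple(a + sign * b for a, b in zip(u, v))
            if 0 < norm_sq(w) < bound:
                out.append(w)
    return out
```

Iterating such steps on an exponentially large list of lattice vectors eventually produces a short vector; the 3-tuple variant combines triples instead of pairs, trading time for memory.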
Last updated:  2025-12-02
ALIOTH: An Efficient and Secure Weight-of-Evidence Framework for Privacy-Preserving Data Processing
Ye Dong, Xiangfu Song, W.j Lu, Xudong Chen, Yaxi Yang, Ruonan Chen, Tianwei Zhang, and Jin-Song Dong
Secure two-party computation (2PC)-based privacy-preserving machine learning (ML) has made remarkable progress in recent years. However, most existing works overlook the privacy challenges that arise during the data preprocessing stage. Although some recent studies have introduced efficient techniques for privacy-preserving feature selection and data alignment on well-structured datasets, they still fail to address the privacy risks involved in transforming raw data features into ML-effective numerical representations. In this work, we present ALIOTH, an efficient 2PC framework that securely transforms raw categorical and numerical features into Weight-of-Evidence (WoE)-based numerical representations under both vertical and horizontal data partitions. By incorporating our proposed partition-aware 2PC protocols and vectorization optimizations, ALIOTH efficiently generates WoE-transformed datasets in secret. To demonstrate scalability, we conduct experiments on diverse datasets. Notably, ALIOTH can transform 3 million data samples with 100 features securely within half an hour over a wide-area network. Furthermore, ALIOTH can be seamlessly integrated with existing 2PC-based ML frameworks. Empirical evaluations on real-world financial datasets show ALIOTH improves both the predictive performance of logistic regression and 2PC training efficiency.
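For orientation, the cleartext Weight-of-Evidence encoding that ALIOTH computes under 2PC replaces each category with the log-odds ratio of its class-conditional frequencies. The plaintext sketch below is the standard credit-scoring formulation, not ALIOTH's protocol; the smoothing constant `eps` is an illustrative choice to avoid division by zero.

```python
import math
from collections import Counter

def woe_table(categories, labels, eps=0.5):
    """Cleartext Weight-of-Evidence per category:
       WoE(c) = ln( P(c | y=1) / P(c | y=0) ),
    with additive smoothing `eps` so empty categories do not hit log(0)."""
    pos = Counter(c for c, y in zip(categories, labels) if y == 1)
    neg = Counter(c for c, y in zip(categories, labels) if y == 0)
    n_pos = sum(pos.values())
    n_neg = sum(neg.values())
    woe = {}
    for c in set(categories):
        p = (pos[c] + eps) / (n_pos + eps)  # smoothed P(c | y=1)
        q = (neg[c] + eps) / (n_neg + eps)  # smoothed P(c | y=0)
        woe[c] = math.log(p / q)
    return woe
```

A positive WoE marks a category more frequent among positives; the resulting numeric feature feeds directly into models such as logistic regression. ALIOTH's contribution is computing this table when the categories and labels are split across two parties.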
Last updated:  2025-12-02
Abuse Resistant Traceability with Minimal Trust for Encrypted Messaging Systems
Zhongming Wang, Tao Xiang, Xiaoguo Li, Guomin Yang, Biwen Chen, Ze Jiang, Jiacheng Wang, Chuan Ma, and Robert H. Deng
Encrypted messaging systems provide end-to-end security for users but obstruct content moderation, making it difficult to combat online abuses. Traceability offers a promising solution by enabling platforms to identify the originator/spreader of messages, yet this capability can be abused for mass surveillance of innocent messages. To mitigate this risk, existing approaches restrict traceability to (problematic) messages that are reported by multiple users or are on a predefined blocklist. However, these solutions either overtrust a specific entity (e.g., the party defining the blocklist) or rely on the unrealistic assumption of non-collusion between servers run by a single platform. In this paper, we propose an abuse-resistant source tracing scheme that distributes traceability across distinct real-world entities. Specifically, we formally define its syntax and prove its security properties. Our scheme realizes two essential principles: minimal trust, which ensures that traceability cannot be abused as long as a single participant involved in tracing is honest, even if all others collude; and minimal information disclosure, which prevents participants from acquiring any information (e.g., communication parties' identities) unnecessary for tracing. We implemented our scheme using techniques deployed by Signal, and our evaluation shows it offers comparable performance to state-of-the-art schemes that are vulnerable to abuse.
Last updated:  2025-12-02
BEANIE – A 32-bit Cipher for Cryptographic Mitigations against Software Attacks
Simon Gerhalter, Samir Hodžić, Marcel Medwed, Marcel Nageler, Artur Folwarczny, Ventzi Nikov, Jan Hoogerbrugge, Tobias Schneider, Gary McConville, and Maria Eichlseder
Modern CPU architectures include various security features to mitigate software attacks, such as logical isolation, memory tagging, or shadow stacks. Basing such features on cryptographic isolation instead of logical checks can have many advantages, such as lower memory overhead and greater robustness against misconfiguration or low-cost physical attacks. The disadvantage of such an approach, however, is that the cipher that has to be introduced has a severe impact on system performance, either in terms of additional cycles or a decrease of the maximum achievable frequency. Moreover, as of today, there is no suitable low-latency cipher design available for encrypting 32-bit words, as is common in microcontrollers. In this paper, we propose a 32-bit tweakable block cipher tailored to memory encryption for microcontroller units. We optimize this cipher for low latency, which we achieve by a careful selection of components for the round function and by leveraging an attack scenario similar to the one used to analyze the cipher SCARF. To mitigate some attack vectors introduced by this attack scenario, we deploy a complex tweak-key schedule. Due to the shortage of suitable 32-bit designs, we compare our design to various low-latency ciphers with different block sizes. Our hardware implementation shows competitive latency numbers.
Last updated:  2025-12-02
Fully Adaptive Threshold IBE and Signatures in the Standard Model
Jiayun Yan, Yu Li, Jie Chen, Haifeng Qian, Xiaofeng Chen, and Debiao He
We present fully adaptive secure threshold IBE and threshold signatures, which rely on the $k$-Linear assumption in the standard model over asymmetric pairing groups. In particular, our threshold signature scheme achieves a non-interactive signing process and an adaptive security guarantee as strong as that of Das-Ren (CRYPTO'24), whose proof relies on the random oracle model. We achieve our results by the following steps: First, we design two threshold IBE schemes against adaptive corruptions in composite-order and prime-order groups by adopting the dual system groups encoding. Second, we provide a generic transform from threshold IBE to threshold signatures, following Naor's paradigm, which reduces the fully adaptive corruption security of threshold signatures to that of threshold IBE. Third, we present two threshold signature instantiations in composite-order and prime-order groups.
Last updated:  2025-12-02
Audience Injection Attacks: A New Class of Attacks on Web-Based Authorization and Authentication Standards
Pedram Hosseyni, Ralf Kuesters, and Tim Würtele
We introduce audience injection attacks, a novel class of vulnerabilities that impact widely used Web-based authentication and authorization protocols, including OAuth 2.0, OpenID Connect, FAPI, CIBA, the Device Authorization Grant, and various well-established extensions, such as Pushed Authorization Requests, Token Revocation, Token Introspection, and their numerous combinations. These protocols underpin services for billions of users across diverse ecosystems worldwide, spanning low-risk applications like social logins to high-risk domains such as open banking, insurance, and healthcare. Audience injection attacks exploit a critical weakness in a core security mechanism of these protocols: the handling of so-called audiences in signature-based client authentication mechanisms. This vulnerability allows attackers to compromise fundamental security objectives whenever these mechanisms are utilized across two or more server endpoints. They enable the attacker to impersonate users and gain unauthorized access to their resources, even in high-security protocol families specifically designed for sensitive applications. We responsibly disclosed these vulnerabilities to the relevant standardization bodies, which recognized their severity. In collaboration with these organizations, we developed fixes and supported a coordinated response, leading to an ongoing effort to update a dozen standards, numerous major implementations, and far-reaching ecosystems.
Last updated:  2025-12-02
One-way Functions and Boundary Hardness of Randomized Time-Bounded Kolmogorov Complexity
Yanyi Liu and Rafael Pass
We revisit the question of whether worst-case hardness of the time-bounded Kolmogorov complexity problem, $\KpolyA$---that is, determining whether a string is ``structured" (i.e., $K^t(x) < n-1$) or ``random" (i.e., $K^{\poly(t)}(x) \geq n-1$)---suffices to imply the existence of one-way functions (OWF). Liu-Pass (CRYPTO'25) recently showed that worst-case hardness of a \emph{boundary} version of $\KpolyA$---where, roughly speaking, the goal is to decide whether given an instance $x$, (a) $x$ is $K^\poly$-random (i.e., $K^{\poly(t)}(x) \geq n-1$), or just close to $K^\poly$-random (i.e., $K^{t}(x) < n-1$ \emph{but} $K^{\poly(t)}(x) > n - \log n$)---characterizes OWF, but with either of the following caveats: (1) considering a non-standard notion of \emph{probabilistic $K^t$}, as opposed to the standard notion of $K^t$, or (2) assuming somewhat strong, and non-standard, derandomization assumptions. In this paper, we present an alternative method for establishing their result which enables significantly weakening the caveats. First, we show that boundary hardness of the more standard \emph{randomized} $K^t$ problem suffices (where randomized $K^t(x)$ is defined just like $K^t(x)$ except that the program generating the string $x$ may be randomized). As a consequence of this result, we can provide a characterization also in terms of just ``plain" $K^t$ under the most standard derandomization assumption (used to derandomize just $\BPP$ into $\P$)---namely $\E \not\subseteq {\sf ioSIZE}[2^{o(n)}]$.
Our proof relies on language compression schemes of Goldberg-Sipser (STOC'85); using the same technique, we also present the first worst-case to average-case reduction for the \emph{exact} $\KpolyA$ problem (under the same standard derandomization assumption), improving upon Hirahara's celebrated results (STOC'18, STOC'21), which only applied to a \emph{gap} version of the $\KpolyA$ problem, referred to as $\GapKpolyA$, where the goal is to decide whether $K^t(x) \leq n-O(\log n)$ or $K^{\poly(t)}(x) \geq n-1$, under the same derandomization assumption.
Last updated:  2025-12-02
Hardware Implementation of Stealthy and Lightweight Backdoor for CRYSTALS-Kyber
Suraj Mandal, Prasanna Ravi, M Dhilipkumar, Debapriya Basu Roy, and Anupam Chattopadhyay
The threat of practical quantum attacks has catapulted viable alternatives like Post-Quantum Cryptography (PQC) into prominence. The adoption and integration of standardized PQC primitives across the entire digital stack are promoted by various standardization bodies, governments, and major corporate houses. A serious challenge in quantum migration is to ensure that there is no hidden backdoor in the PQC implementations of a hybrid cryptosystem (support for both pre-quantum and post-quantum algorithms), which are often procured from a third-party vendor. In this manuscript, we investigate the possibility of a kleptographic backdoor on the NIST-recommended key-encapsulation mechanism CRYSTALS-Kyber. The modified Kyber key-generation algorithm achieves indistinguishable decryption failure probability compared to the original CRYSTALS-Kyber. The kleptographic module is also implemented in FPGA, embedded inside the CRYSTALS-Kyber accelerator with a very low area overhead (283 LUTs or 2% of total area), and thus can easily pass performance and functionality tests.
Last updated:  2025-12-02
Decentralized Data Archival: New Definitions and Constructions
Elaine Shi, Rose Silver, and Changrui Mu
We initiate the study of a new abstraction called incremental decentralized data archival (${\sf iDDA}$). Specifically, imagine that there is an ever-growing, massive database such as a blockchain, a comprehensive human knowledge base like Wikipedia, or the Internet archive. We want to build a decentralized archive of such datasets to ensure long-term robustness and sustainability. We identify several important properties that an ${\sf iDDA}$ scheme should satisfy. First, to promote heterogeneity and decentralization, we want to encourage even weak nodes with limited space (e.g., users' home computers) to contribute. The minimum space requirement to contribute should be approximately independent of the data size. Second, if a collection of nodes together receive rewards commensurate with contributing a total of $m$ blocks of space, then we want the following reassurances: 1) if $m$ is at least the database size, we should be able to reconstruct the entire dataset; and 2) these nodes should actually be committing roughly $m$ space in aggregate --- even when $m$ is much larger than the data size, the nodes should be storing redundant copies of the database, rather than storing just one copy while impersonating arbitrarily many pseudonyms to collect unbounded rewards. We propose new definitions that mathematically formalize the aforementioned requirements of an ${\sf iDDA}$ scheme. We also devise an efficient construction in the random oracle model which satisfies the desired security requirements. Our scheme incurs only $\widetilde{O}(1)$ audit cost, as well as $\widetilde{O}(1)$ update cost for both the publisher and each node, where $\widetilde{O}(\cdot)$ hides polylogarithmic factors. Further, the minimum space provisioning required to contribute is as small as polylogarithmic. Our construction exposes several interesting technical challenges.
Specifically, we show that a straightforward application of the standard hierarchical data structure fails, since both our security definition and the underlying cryptographic primitives we employ lack the desired compositional guarantees. We devise novel techniques to overcome these compositional issues, resulting in a construction with provable security while still retaining efficiency. Finally, our new definitions also make a conceptual contribution, and lay the theoretical groundwork for the study of ${\sf iDDA}$. We raise several interesting open problems along this direction.
Last updated:  2025-12-02
Cryptanalysis on Asymmetric Structured Key Agreement Schemes
Koki Jimbo
We study several asymmetric structured key agreement schemes based on noncommutative matrix operations, including the recent proposal of Lizama as well as the strongly asymmetric algorithms SAA-3 and SAA-5 of Accardi et al.\ We place them in a common algebraic framework for public key agreement and identify simple structural conditions under which an eavesdropper can reconstruct an effective key-derivation map and reduce key recovery to solving linear systems over finite fields. We then show that the three matrix-based schemes mentioned above all instantiate our algebraic framework and can therefore be broken in polynomial time from public information alone. In particular, their security reduces to the hardness of linear-algebraic problems and does not exceed that of the underlying discrete logarithm problem. Our results demonstrate that the weakness of these schemes is structural rather than parametric, and that minor algebraic modifications are insufficient to repair them.
Last updated:  2025-12-01
SVP$_p$ is Deterministically NP-Hard for all $p > 2$, Even to Approximate Within a Factor of $2^{\log^{1-\varepsilon} n}$
Isaac M Hair and Amit Sahai
We prove that SVP$_p$ is NP-hard to approximate within a factor of $2^{\log^{1 - \varepsilon} n}$, for all constants $\varepsilon > 0$ and $p > 2$, under standard deterministic Karp reductions. This result is also the first proof that \emph{exact} SVP$_p$ is NP-hard in a finite $\ell_p$ norm. Hardness for SVP$_p$ with $p$ finite was previously only known if NP $\not \subseteq$ RP, and under that assumption, hardness of approximation was only known for all constant factors. As a corollary to our main theorem, we show that under the Sliding Scale Conjecture, SVP$_p$ is NP-hard to approximate within a small polynomial factor, for all constants $p > 2$. Our proof techniques are surprisingly elementary; we reduce from a regularized PCP instance directly to the shortest vector problem by using simple gadgets related to Vandermonde matrices and Hadamard matrices.
Last updated:  2025-12-01
Weight of Polynomial Products Mod $(X^n+1)$: Application to the HQC Cryptosystem
Laila El Aimani
We consider the following problem: given two random polynomials $x$ and $y$ in the ring $\F_2[X]/(X^n+1)$, our goal is to compute the expectation and variance of the weight of their product $x\cdot y$, where the weight of a binary polynomial is defined as the number of its nonzero coefficients. We consider two models for random polynomials $x$ and $y$: (1) the uniform slice case with fixed weights $w_x,w_y$, and (2) the binomial case where their coefficients are independent Bernoulli variables with success probabilities $p_x$ and $p_y$ respectively. Our work finds a direct application in the accurate analysis of the decryption failure rate for the HQC code-based encryption scheme. The original construction relied on heuristic arguments supported by experimental data. Later, Kawachi provided a formally proven security bound, albeit a much weaker one than the heuristic estimate in the original construction. A fundamental limitation of both analyses is their restriction to the binomial case, a simplification that compromises the resulting security guarantees. Our analysis provides the first precise computation of the expectation and variance of weight($x\cdot y$) across both the uniform slice and binomial models. The results confirm the soundness of the HQC security guarantees and allow for a more informed choice of the scheme parameters that optimizes the trade-off between security and efficiency.
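The quantity studied can be explored empirically with a small Monte Carlo sketch in the uniform-slice model (fixed weights $w_x, w_y$), representing sparse binary polynomials by their supports. This is illustrative sampling only, not the paper's closed-form analysis; the parameters below are arbitrary.

```python
import random

def mul_mod(x_supp, y_supp, n):
    """Product of sparse binary polynomials in F_2[X]/(X^n+1), each given
    as the set of indices of its nonzero coefficients. Since X^n = 1 in
    this ring, the term X^i * X^j contributes to coefficient (i+j) mod n."""
    acc = [0] * n
    for i in x_supp:
        for j in y_supp:
            acc[(i + j) % n] ^= 1  # XOR = coefficient addition in F_2
    return acc

def avg_product_weight(n, wx, wy, trials=200, seed=0):
    """Monte Carlo estimate of E[weight(x*y)] in the uniform-slice model,
    sampling x and y uniformly among polynomials of weight wx and wy."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        xs = rng.sample(range(n), wx)
        ys = rng.sample(range(n), wy)
        total += sum(mul_mod(xs, ys, n))
    return total / trials
```

Such sampling reproduces the heuristic experiments the original HQC analysis relied on; the paper's contribution is replacing them with exact expectation and variance formulas.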
Last updated:  2025-12-01
Policy Compliant Secure Messaging
Joël Alwen, Xiaohui Ding, Sanjam Garg, and Yiannis Tselekounis
We initiate the holistic study of Policy Compliant Secure Messaging (PCSM). A content policy is a predicate over messages that decides which messages are considered harmful and which are not. A PCSM protocol is a type of end-to-end encrypted (E2EE) messaging system that guarantees E2EE privacy and authenticity for all policy-compliant messages but detects and verifiably reports harmful content prior to its delivery. This stands in contrast to prior content moderation systems for E2EE messaging, where detection relies on receivers reporting the harmful content themselves, which makes them unsuited for most PCSM applications (e.g., preventing the wilful distribution of harmful content). Our holistic PCSM notion explicitly captures several new roles, such as policy creator, auditor, and judge, to more accurately separate and model the different goals and security concerns of stakeholders when deploying PCSM. We present efficient PCSM constructions for arbitrary policy classes, as well as for hash-based ones, achieving various levels of security while maintaining the core security properties of the underlying E2EE layer. For hash-based PCSM, we encapsulate Apple’s recent PSI protocol used in their content moderation system, properly adapt it to realize the desired PCSM functionality, and analyze the resulting protocol’s security. To our knowledge, our work is the first to rigorously study Apple’s PSI for server-side content moderation within the broader context of secure messaging, addressing the diverse goals and security considerations of stakeholders when deploying larger systems.
Last updated:  2025-12-01
TAPIR: A Two-Server Authenticated PIR Scheme with Preprocessing
Francesca Falzon, Laura Hetz, and Annamira O'Toole
Authenticated Private Information Retrieval (APIR) enables a client to retrieve a record from a public database and verify that the record is “authentic” without revealing any information about which record was requested. In this work, we propose Tapir: the first two-server APIR scheme to achieve both sublinear communication and computation complexity for queries, while also supporting updates. Our scheme builds upon the unauthenticated two-server PIR scheme SinglePass (Lazzaretti and Papamanthou, USENIX’24). Due to its modular design, Tapir provides different trade-offs depending on the underlying vector commitment scheme used. Moreover, Tapir is the first APIR scheme with preprocessing to support appends and edits in time linear in the database partition size. This makes it an ideal candidate for transparency applications that require support for integrity, database appends, and private lookups. We provide a formal security analysis and a prototype implementation that demonstrates our scheme’s efficiency. Tapir incurs as little as 0.11% online bandwidth overhead for databases of size $2^{22}$, compared to the unauthenticated SinglePass. For databases of size $\geq 2^{20}$, our scheme, when instantiated with Merkle trees, outperforms all prior multi-server APIR schemes with respect to online runtime.
Last updated:  2025-12-01
SPECK: Signatures from Permutation Equivalence of Codes and Kernels
Marco Baldi, Michele Battagliola, Rahmi El Mechri, Paolo Santini, Riccardo Schiavoni, and Davide De Zuane
The ongoing search for secure post-quantum cryptographic primitives has led to the development of numerous novel digital signature schemes. In this paper we introduce $\mathsf{SPECK}$, a new signature protocol based on the similarities between the Permuted Kernel Problem ($\mathsf{PKP}$) and the Permutation Code Equivalence Problem ($\mathsf{PEP}$). At its core, $\mathsf{SPECK}$ is built on the permutation version of LESS, but introduces a key modification to the commitment step. Instead of committing to an entire permuted code, the prover commits to a random relaxed $\mathsf{PKP}$ instance (that we call $\mathsf{PECK}$, Permutation Equivalence of Codes and Kernel) by randomly choosing a codeword from a random permutation of the initial code. In this sense, the secret key is used as a trapdoor to solve the committed $\mathsf{PECK}$ instance. The new approach allows for faster verification that does not involve Gaussian elimination, while maintaining roughly the same signature size as LESS. We present the $\mathsf{SPECK}$ protocol in detail and provide a deep analysis of the security of the newly introduced assumptions.
Last updated:  2025-12-01
Replication of Quantum Factorisation Records with an 8-bit Home Computer, an Abacus, and a Dog
Peter Gutmann and Stephan Neuhaus
This paper presents implementations that match and, where possible, exceed current quantum factorisation records using a VIC-20 8-bit home computer from 1981, an abacus, and a dog. We hope that this work will inspire future efforts to match any further quantum factorisation records, should they arise.
Last updated:  2025-12-01
Committing Authenticated Encryption: Generic Transforms with Hash Functions
Shan Chen and Vukašin Karadžić
Recent applications and attacks have highlighted the need for authenticated encryption (AE) schemes to achieve the so-called committing security beyond privacy and authenticity. As a result, several generic solutions have been proposed to transform a non-committing AE scheme to a committing one, for both basic unique-nonce security and advanced misuse-resistant (MR) security. We observe that all existing practical generic transforms are subject to at least one of the following limitations: (i) not committing to the entire encryption context, (ii) involving non-standard primitives, (iii) not being a black-box transform, (iv) providing limited committing security. Furthermore, so far, there has been no generic transform that can directly elevate a basic AE scheme to a committing AE scheme that offers MR security. Our work fills these gaps by developing black-box generic transforms that crucially rely on hash functions, which are well standardized and widely deployed. First, we construct three basic transforms that combine AE with a single hash function, which we call $\mathsf{HtAE}, \mathsf{AEaH}$ and $\mathsf{EtH}$. They all guarantee strong security, and $\mathsf{EtH}$ can be applied to both AE and basic privacy-only encryption schemes. Next, for MR security, we propose two advanced hash-based transforms that we call $\mathsf{AEtH}$ and $\mathsf{chaSIV}$. $\mathsf{AEtH}$ is an MRAE-preserving transform that adds committing security to an MR-secure AE scheme. $\mathsf{chaSIV}$ is the first generic transform that can directly elevate basic AE to one with both committing and MR security; moreover, $\mathsf{chaSIV}$ also works with arbitrary privacy-only encryption schemes. Both of them feature a simple design and ensure strong security. For performance evaluation, we compare our transforms to similar existing ones, both in theory and through practical implementations. 
The results show that our $\mathsf{AEaH}$ achieves the highest practical efficiency among basic transforms, while $\mathsf{AEtH}$ excels in MRAE-preserving transforms. Our MRAE-lifting transform $\mathsf{chaSIV}$ demonstrates comparable performance to MRAE-preserving ones and surpasses them for messages larger than approximately $360$ bytes; for longer messages, it even outperforms the benchmark, non-committing standardized $\mathsf{AES}\text{-}\mathsf{GCM}\text{-}\mathsf{SIV}$.
Last updated:  2025-12-01
Revisiting Rational Broadcast Protocols
Shunya Otomo and Kenji Yasunaga
A recent study by Yamashita and Yasunaga (GameSec 2023) presented a constant-round deterministic broadcast protocol secure against \emph{detection-averse} adversaries --- those who prefer to attack without being detected. In this work, we revisit their protocol and observe that it remains secure even against a broader class of adversaries, not necessarily detection-averse. We formalize its detection mechanism as \emph{local detectability} and construct broadcast protocols with local detectability that address two weaknesses of the original protocol: (1) it only guarantees weak validity, and (2) it may cause false detections. Our first protocol achieves round complexity four against rational adversaries and $t+4$ against malicious adversaries, where the adversary corrupts at most $t$ parties. Our second protocol achieves the optimal round complexity of $t+1$ for malicious adversaries, while the round complexity is four against detection-averse adversaries.
Last updated:  2025-12-01
Systems Security Foundations for Agentic Computing
Mihai Christodorescu, Earlence Fernandes, Ashish Hooda, Somesh Jha, Johann Rehberger, and Khawaja Shams
This paper articulates short- and long-term research problems in AI agent security and privacy, using the lens of computer systems security. This approach examines end-to-end security properties of entire systems, rather than AI models in isolation. While we recognize that hardening a single model is useful, it is important to realize that it is often insufficient. By way of an analogy, creating a model that is always helpful and harmless is akin to creating software that is always helpful and harmless. The collective experience of decades of cybersecurity research and practice shows that this is insufficient. Rather, constructing an informed and realistic attacker model before building a system, applying hard-earned lessons from software security, and continuous improvement of security posture is a tried-and-tested approach to securing real computer systems. A key goal is to examine where research challenges arise when applying traditional security principles in the context of AI agents. A secondary goal of this report is to distill these ideas for AI and ML practitioners and researchers. We discuss the challenges of applying security principles to agentic computing, present 11 case studies of real attacks on agentic systems, and define a series of new research problems specific to the security of agentic systems.
Last updated:  2025-12-01
Efficient Garbled Pseudorandom Functions and Lookup Tables from Minimal Assumption
Wei-Kai Lin, Zhenghao Lu, and Hong-Sheng Zhou
Yao's garbled circuits have received huge attention in both theory and practice. While garbled circuits can be constructed from minimal assumptions (i.e., the existence of pseudorandom functions or one-way functions), the state-of-the-art constructions (e.g., Rosulek-Roy, Crypto 2021) are based on stronger assumptions. In particular, the ``Free-XOR'' technique (Kolesnikov-Schneider, ICALP 2008) is essential in these state-of-the-art constructions, and their security can only be proven in the random oracle model or under the ``circular-correlation robust hash'' assumption. In this paper, we aim to develop new techniques to construct efficient garbling schemes from minimal assumptions. Instead of generically replacing the Free-XOR technique, we focus on garbling schemes for specific functionalities. We successfully eliminate the need for Free-XOR in several state-of-the-art schemes, including one-hot garbling (Heath and Kolesnikov, CCS 2021), garbled pseudorandom functions, and garbled lookup tables (Heath, Kolesnikov and Ng, Eurocrypt 2024). Our schemes are based on minimal assumptions, i.e., standard pseudorandom functions (PRFs); in particular, we remove the need for circular security. Our scheme is almost as efficient as the best known results, up to a small constant factor. Namely, for any lookup table $\{0,1\}^n \to \{0,1\}^m$, our scheme takes $n + (5n+9)m\lambda + 2^n \cdot m$ bits of communication, where $\lambda$ is the security parameter of the PRF.
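The communication bound quoted at the end of this abstract is a closed-form expression, so it can be evaluated directly for concrete table sizes. A minimal sketch (the helper name and example parameters are ours, not the paper's):

```python
def garbled_lut_comm_bits(n, m, lam=128):
    """Communication cost in bits for a garbled lookup table {0,1}^n -> {0,1}^m,
    per the formula quoted in the abstract: n + (5n + 9) * m * lambda + 2^n * m."""
    return n + (5 * n + 9) * m * lam + (2 ** n) * m

# Example: an 8-bit -> 8-bit table at security parameter lambda = 128
bits = garbled_lut_comm_bits(8, 8)
print(bits, "bits =", bits // 8, "bytes")
```

Note how the $(5n+9)m\lambda$ term dominates for small $n$, while the $2^n \cdot m$ truth-table term takes over as $n$ grows.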
Last updated:  2025-12-01
Practical Asynchronous Distributed Key Reconfiguration and Its Applications
Hanwen Feng, Yingzi Gao, Yuan Lu, Qiang Tang, and Jing Xu
In this paper, we study practical constructions of asynchronous distributed key reconfiguration ($\mathsf{ADKR}$), which enables an asynchronous fault-tolerant system with an existing threshold cryptosystem to efficiently generate a new threshold cryptosystem for a reconfigured set of participants. While existing asynchronous distributed threshold key generation ($\mathsf{ADKG}$) protocols theoretically solve $\mathsf{ADKR}$, they fail to deliver satisfactory scalability due to cubic communication overhead, even with simplifications to the reconfiguration setting. We introduce a \textit{share-dispersal-then-agree-and-recast} paradigm for constructing efficient $\mathsf{ADKR}$ while preserving adaptive security. Our method reduces the total overhead from $O(n^3)$ to $O(\kappa n^2)$, where $\kappa$ is a small constant (typically $\sim$30 or less). Further optimizations in $\mathsf{PVSS}$ minimize redundant computations across different parties and reduce the dominating $\mathsf{PVSS}$ verification cost by about one-third. Our techniques developed for $\mathsf{ADKR}$ can also be leveraged to improve the asymptotic efficiency of various other asynchronous protocols: (i) they imply the first (coin-assisted) quadratic-communication $\mathsf{ADKG}$; and (ii) they can be extended to realize the first quadratic-communication asynchronous dynamic proactive secret sharing ($\mathsf{APSS}$) with adaptive security. Experimental evaluations on a global network of 256 AWS servers show up to 40\% lower latency compared to the state-of-the-art $\mathsf{ADKG}$ protocols that are simplified to the reconfiguration setting, highlighting the practicality of our $\mathsf{ADKR}$ in large-scale asynchronous systems.
Last updated:  2025-11-30
Data Availability Sampling with Repair
Dan Boneh, Joachim Neu, Valeria Nikolaenko, and Aditi Partap
Data availability sampling (DAS) is an important technique to horizontally scale consensus protocols without compromising on the number of adversarial nodes that can be tolerated. DAS is on the technical roadmap of major blockchains such as Ethereum. A major challenge for DAS schemes, that has not been formally studied in the literature, is how incomplete shares can be repaired. The need for repairing data shares motivates key aspects of Ethereum's DAS-based sharding vision called "Danksharding". In this work, we make two contributions. First, we provide a new definitional framework that formalizes the notion of local repair, along with the security guarantees that a DAS scheme must provide. Second, we propose a new DAS scheme designed with efficient local repair in mind, based on locally-correctable multiplicity codes. To facilitate using these codes, we introduce a new multivariate polynomial commitment scheme that (i) supports efficient openings of partial derivatives of a committed polynomial, (ii) supports fast batch opening proof generation at many points, and (iii) has an algorithm to recompute (repair) opening proofs at a point from only a few other proofs. The proposed scheme improves upon the state-of-the-art Ethereum PeerDAS scheme, deployed in December 2025, in storage overhead, local repair bandwidth and coordination, while only slightly increasing dispersal cost and sampling bandwidth. As an additional benefit, our construction also supports efficient partial reconstruction, i.e., retrieving parts of the stored data. All of our techniques readily carry over to data availability schemes based on verifiable information dispersal (VID) as well.
Last updated:  2025-11-30
Crypto Wars in Secure Messaging: Covert Channels in Signal Despite Leaked Keys
Mohammadamin Rakeei, Rosario Giustolisi, Andy Rupp, Chuanwei Lin, and Gabriele Lenzini
End-to-end encryption (E2EE) is the foundation of modern secure messaging, with the Signal protocol as the de facto standard in applications such as Signal, WhatsApp, Facebook Messenger and Google Messages. At the same time, the deployment of E2EE has led to growing pressure from authorities to decrypt user traffic under lawful enforcement. This raises a critical question: if an adversary can routinely decrypt Signal messages (for example via a mandated access or a leaked key), can users still communicate securely and covertly? We address this question through the lens of anamorphic encryption, which enables hidden communication within seemingly legitimate ciphertexts, even against an adversary who can decrypt them. We design two constructions that embed covert channels into the existing Signal Double Ratchet protocol. Concretely, we show how to embed covert messages (i) into Diffie-Hellman keys used in the asymmetric ratchet, or (ii) into authentication tags produced in the symmetric ratchet. Our techniques are compatible with existing Signal-style deployments and require no changes by the service provider. We formalize security in threat models that capture adversaries with decryption capabilities granted through lawful-access mechanisms, and prove that the resulting protocol transcripts are indistinguishable from those of standard Signal. We implement our constructions in the official Signal library and Android client, and show that they incur low overhead and are practical in real-world settings. Our results show that covert communication channels can persist even when conventional E2EE guarantees are compromised.
Last updated:  2025-11-29
Efficient GHASH and POLYVAL Implementation Using Polynomial Multiplication: Optimized 64-bit Decomposition with Bit-Reversal Elimination
Mamone Tarsha Kurdi and Niels Möller
We present an optimized implementation of the GHASH and POLYVAL authentication algorithms used in AES-GCM and AES-GCM-SIV that eliminates the computational overhead of bit-reversal operations. Our approach computes these universal hash functions directly in bit-reversed representation, matching the native format used by carry-less multiplication instructions available on modern processors. The algorithm exploits 64-bit polynomial primitives and parallel execution on superscalar architectures. We achieve performance of 0.34 cycles/byte on POWER9 (35% faster than OpenSSL) and 0.33 cycles/byte on Intel Comet Lake (11% faster than OpenSSL), representing a 32-fold improvement over table-based software implementations. Combined with hardware accelerated AES, the complete AES-GCM mode achieves 1.12 cycles/byte throughput. For platforms with hardware carry-less multiplication (x86 PCLMULQDQ, ARM PMULL, PowerPC vpmsumd), the R/F algorithm achieves ∼1.7× speedup over Karatsuba. For portable software implementations without hardware acceleration, we demonstrate that Karatsuba remains 1.4-1.6× faster due to reduced multiplication count.
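The primitive underlying this work is carry-less (polynomial) multiplication over $\mathrm{GF}(2)[X]$, which instructions such as x86 PCLMULQDQ, ARM PMULL, and PowerPC vpmsumd compute in hardware. The following sketch is our own bit-level reference model of that primitive, not the paper's optimized implementation, and omits the GHASH/POLYVAL field reduction:

```python
def clmul64(a, b):
    """Carry-less product of two 64-bit values, i.e. multiplication of the
    polynomials over GF(2) whose coefficients are the bits of a and b.
    The result is up to 127 bits wide; this models what PCLMULQDQ computes."""
    r = 0
    for i in range(64):
        if (b >> i) & 1:
            r ^= a << i  # XOR instead of add: no carries propagate between bit positions
    return r

# Carry-less squaring spreads bits apart: (x^2 + x + 1)^2 = x^4 + x^2 + 1 over GF(2)
print(bin(clmul64(0b111, 0b111)))  # prints 0b10101
```

A full GHASH or POLYVAL block then reduces the 127-bit product modulo the field polynomial $x^{128} + x^7 + x^2 + x + 1$; the paper's contribution is organizing these multiplications and reductions (in bit-reversed representation) to avoid the reversal overhead entirely.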
Last updated:  2025-11-29
Lattice-Based Linkable Ring Signatures for Anonymous and Accountable Whistleblowing
Vishal Pareek, Aditi Kar Gangopadhyay, and Sugata Gangopadhyay
Ring signatures allow an individual to sign a message on behalf of a group in such a way that the verifier can only confirm that someone in the group signed it, but cannot identify the actual signer. This strong anonymity, while desirable, may also be exploited for repeated or harmful activities. Linkable ring signatures mitigate this issue by enabling the system to recognise whether two signatures originate from the same signer, while still keeping the signer anonymous. Such constructions are essential in domains like e-voting, e-cash, privacy-preserving blockchain systems, and whistleblowing, where detecting repeated actions—such as double-spending or double-voting—is necessary to maintain system reliability. In this paper, we present a lattice-based linkable ring signature scheme designed to withstand quantum-era adversaries. The framework relies on exact and efficient zero-knowledge proofs, and employs a weak pseudorandom function (wPRF) to enable linkability. To demonstrate both ring membership and the generation of a unique tag, we integrate a Merkle tree accumulator, which also streamlines the verification steps. The scheme is instantiated using concrete parameter choices, allowing us to precisely evaluate how the signature size scales with different ring sizes. An important feature of our design is that it eliminates the need for trapdoor techniques, yet still produces a signature of roughly 0.22 MB when the ring contains $2^{10}$ users. We further outline practical application scenarios, such as anonymous but accountable whistleblowing, to highlight the usefulness of the proposed construction.
Last updated:  2025-11-29
On the Plaintext Awareness of AEAD Schemes
Mario Marhuenda Beltrán and Mustafa Khairallah
Plaintext-awareness of AEAD schemes is one of the more obscure and easily misunderstood notions. Originally proposed by Andreeva et al., the definitions were shown by Mennink and Talnikar in 2025 to be vague, leaving too much room for interpretation. They presented new definitions and analyzed the three main AEAD compositions relative to them. In particular, they showed that MAC-then-Encrypt (MtE) is not plaintext-aware, whereas an SIV-style variant is. In this work, we first show that their analysis contains a gap, voiding their proof. We demonstrate this via several attacks: one against their choice of extractor, with trivial complexity, and one against any extractor, with birthday-bound complexity. Next, we re-establish their results by designing a new extractor that captures their intended goal, and prove a tight PA1 security bound. We also show that the result does not depend on the encryption scheme used: an extractor can also be designed for sMtE[ΘCB3], a variant where the encryption step is done by an ΘCB3-like scheme. Afterwards, we strengthen their results by revisiting other compositions. In particular, we show that Encrypt-then-MAC (EtM) is in fact PA2-secure. Furthermore, we show that SIV-style MtE cannot be PA2-secure. Additionally, we show that Encode-then-Encipher is also PA2-secure, but not beyond the first successful forgery. More importantly, we show that, up to some necessary assumptions, PA2 and RAE are equivalent. This result, while positive, holds only up to the first successful forgery. This indicates that the PA2 formalization may be too strong for practical schemes, since we believe RAE is sufficient for intuitive security. Last but not least, we investigate the value of the notion of plaintext awareness, looking deeper into the relation between plaintext awareness and confidentiality.
We show that the problem of confidentiality in the presence of release of unverified plaintext can be seen as a confidentiality-with-leakage problem in the simulatable leakage framework. In this regard, we start by showing that PA1 security cannot imply confidentiality with leakage. Similarly, we compare our results to the AERUP notion of TOSC 2019. We show that a scheme can be AERUP-secure yet insecure against a somewhat straightforward left-or-right attack in the same model. This puts into question the meaning and relevance of the PA1 and AERUP security notions. Results based on these notions can be seen as providing security in a two-phase game, where the adversary does not observe any leakage after the first challenge query, as we argue in the paper. On the positive side, we show that if a scheme achieves IND-CPA, INT-RUP and PA2, then it achieves confidentiality with leakage for the appropriate leakage function. This closes a gap in the literature on the relation between confidentiality with leakage and RUP notions.
Last updated:  2025-11-29
Scalable Private World Computer via Root iO: Application-Agnostic iO and Our Roadmap for Making It Practical
Sora Suegami and Enrico Bottazzi
Ethereum has established itself as a world computer, enabling general-purpose, decentralized, and verifiable computation via smart contracts on a globally replicated state. However, because all computations and state are public by default, it is fundamentally unsuitable for confidential smart contracts that jointly process private data from multiple users. This motivates the notion of a private world computer: an ideal future form of Ethereum that preserves its integrity and availability guarantees while supporting such confidential smart contracts. Prior constructions based on implementable cryptographic primitives such as fully homomorphic encryption (FHE) inevitably rely on committees that hold secret shares and perform computations using those shares, a capability that is not provided by today's Ethereum validators. We cannot simply modify the Ethereum protocol so as to shift the committee’s role onto the Ethereum validators, because the computational and communication costs borne by the committee grow with the demand for confidential smart contracts, forcing higher hardware requirements for participation, undermining decentralization, and increasing the risk of malicious collusion. Hence, there remains a fundamental trade-off between committee decentralization and scalability for confidential smart contracts. In this position paper, we make two contributions toward a scalable private world computer. First, we show how indistinguishability/ideal obfuscation (iO), combined with FHE and succinct non-interactive arguments of knowledge (SNARK), yields a private world computer that, after a one-time obfuscation process, introduces no additional ongoing trust assumptions beyond Ethereum’s validators, incurring no additional overhead for validators to process confidential smart contracts compared to public smart contracts. In this design, a single application-agnostic obfuscated circuit, called root iO, suffices to realize arbitrary confidential smart contracts. 
The outputs of root iO can be verified on-chain at a cost comparable to signature verification, and the obfuscation process can be distributed among multiple parties while remaining secure as long as at least one party is honest. As the second contribution, we outline our roadmap toward a practical implementation of root iO. Assuming that the underlying assumptions of our lattice-based iO construction remain secure, the remaining missing pieces are technically concrete: namely, practical implementations of verifiable FHE and of homomorphic evaluation of a pseudorandom function (PRF) and SNARK verification over key-homomorphic encodings, which together would allow us to implement root iO without incurring prohibitive overhead.
Last updated:  2025-11-29
THF: Designing Low-Latency Tweakable Block Ciphers
Jianhua Wang, Tao Huang, Guang Zeng, Tianyou Ding, Shuang Wu, and Siwei Sun
We introduce the $\textsf{Three-Hash Framework}$ ($\textsf{THF}$), a new instantiation of the $\textsf{LRW+}$ paradigm that employs three hash functions to process tweak inputs. We prove that $\textsf{THF}$ achieves beyond-birthday-bound security under standard assumptions. By extending the general practical cryptanalysis framework to the multiple-tweak setting, we further demonstrate that $\textsf{THF}$ offers balanced resistance to both single- and multiple-tweak attacks, thereby enabling the potential for lower latency compared to existing constructions. Building on this framework, we design $\texttt{Blink}$, a family of tweakable block ciphers optimized for ultra-low latency. $\texttt{Blink}$ features logarithmic-depth Toeplitz-based hashing, which ensures efficient diffusion and scalability with varying tweak lengths. Our cryptanalysis shows that $\texttt{Blink}$ achieves strong security with fewer rounds, while hardware evaluations confirm its superior latency performance. Notably, $\texttt{Blink}$ maintains comparable latency even when the tweak length is doubled, underscoring the scalability advantage of $\textsf{THF}$.
Last updated:  2025-11-28
Multivariate exponential equations with unknown coefficients
Trey Li
We introduce a novel class of equations defined over Euclidean domains. These abstract equations establish a unified framework for deriving new, concrete computational problems useful for cryptography. We prove that solving a single such equation is NP-hard. For systems of these equations, we further prove NP-hardness, average-case hardness, random self-reducibility, search-to-decision reducibility, and trapdoorizability. Based on the hardness of solving these systems, we construct various cryptographic primitives. Our results are proved in an abstract, domain-agnostic manner and hold for a wide range of Euclidean domains. This generality allows the framework to accommodate rich mathematical structures, providing both theoretical depth and flexibility for diverse cryptographic applications.
Last updated:  2025-11-28
Hybrid Subsupport Guessing: A New Hybrid Technique for the Rank Decoding Problem
Hugo Beeloo-Sauerbier Couvée, Antonia Wachter-Zeh, and Violetta Weger
The Rank Decoding Problem (R-DP) has gained a lot of attention due to the competitive KEM proposals ROLLO and RQC, as well as the more recent signature scheme RYDE, the latter being a second-round candidate in the ongoing NIST post-quantum standardization process. While previous attacks on the R-DP are based on combinatorial methods, the seminal work of [Bardet et al., 2020] has shown the potential of attacks that use algebraic modelings, breaking the proposed parameters of ROLLO and RQC. These algebraic attacks model the R-DP as a large system of equations. For most parameter ranges, this system is underdetermined; hence, the algebraic attack first needs to perform several guessing steps to obtain a reduced instance for which the system of equations is overdetermined. These steps, in essence, guess a supersupport of the unknown error support, making this attack a hybrid approach between combinatorial and algebraic solvers. In this paper, we present a novel type of guessing step based on searching for a subsupport of the error support. While supersupport guessing only reduces the length and dimension of the code, subsupport guessing instead reduces the length and the rank weight of the sought-after error vector. This introduces an additional method for instance reduction compatible with supersupport guessing. Both types of guessing step can be performed sequentially in hybrid attacks, and their numbers can be optimized to outperform current hybrid attacks. We provide experimentally supported comparisons of the attack complexities with and without the novel guessing technique. We measure the impact of our new hybrid attack on the RYDE parameters; for the NIST security category 5 parameters, we decrease the hybrid MaxMinors attack complexity from 301 bits to 272 bits, outperforming all other known rank decoders and tightening the margin above the 256-bit threshold. 
For the other security levels, we decrease the complexities to be on par with the best performing combinatorial decoders.
Last updated:  2025-11-28
1-Adaptive Weak Pseudorandom Functions
Davide Li Calsi, Dominique Schröder, and Julian Thomas
A message authentication code (MAC) ensures authenticity and integrity in symmetric-key settings. The Carter–Wegman–Shoup (CWS) paradigm establishes that MACs for arbitrary-length messages can be built in a black-box way using a single call to a pseudorandom function (PRF) on a random input. More than a decade ago, Dodis, Kiltz, Pietrzak, and Wichs left open whether weak pseudorandom functions (wPRFs) would suffice in this construction. This work establishes tight upper and lower bounds that precisely characterize the minimal computational assumptions needed for the security of the CWS paradigm. On the negative side, we prove that weak PRFs are insufficient to instantiate the CWS paradigm. On the positive side, we introduce a new primitive, the 1-adaptive weak pseudorandom function (1-awPRF), which guarantees pseudorandomness for polynomially many non-adaptive queries followed by one adaptive query. We show that 1-awPRFs are sufficient to secure CWS in a black-box manner. Finally, we construct 1-adaptive weak pseudorandom functions in a black-box way from standard cryptographic assumptions, using a new randomized design paradigm that treats randomization as a fundamental structural element. Instantiating our generic construction under the Decisional Diffie-Hellman and Learning with Errors assumptions yields concrete and efficient realizations. These lead to more efficient MAC schemes and illustrate how weak and abstract building blocks can be transformed into stronger and practically useful cryptographic constructions.
Last updated:  2025-11-28
How to Prove Post-Quantum Security for Succinct Non-Interactive Reductions
Alessandro Chiesa, Zijing Di, Zihan Hu, and Yuxi Zheng
Hash-based succinct non-interactive arguments (SNARGs) are widely used in practice, owing to their ease of deployment, notable efficiency, and post-quantum properties. They are constructed via the BCS transformation, which combines an interactive oracle proof (IOP) and a hash-based vector commitment. This success has motivated the study of hash-based succinct non-interactive reductions (SNRDXs), used for recursively ensuring the correctness of distributed computations, by extending the BCS transformation to work with an interactive oracle reduction (IOR) rather than an IOP. We prove that hash-based SNRDXs constructed from IORs are secure in the quantum random oracle model (QROM), provided the IOR satisfies a natural post-quantum analogue of state-restoration security; moreover, we show that (classical) round-by-round security implies post-quantum state-restoration security. Our results thus achieve a post-quantum analogue of the classical security of SNRDXs in the ROM, and generalize a prior result about SNARGs in the QROM to cover recent SNRDX constructions. Moreover, for SNRDXs we propose and achieve an adaptively-secure straightline quantum extraction property in the QROM, while prior work obtains non-adaptive security for SNARGs in the QROM. Along the way, we develop a modular framework for proving the security of the (extended) BCS transformation based on a new quantum extraction property for vector commitments (which we prove is achieved by Merkle commitments), mirroring classical security analyses and departing from prior "monolithic" post-quantum analyses. This demands a new commutator bound that shows the almost-commutativity between quantum extraction and quantum oracle queries, by bounding a natural classical extraction property.
Last updated:  2025-11-28
Extending and Accelerating Inner Product Masking with Fault Detection via Instruction Set Extension
Songqiao Cui, Geng Luo, Junhan Bao, Josep Balasch, and Ingrid Verbauwhede
Inner product masking is a well-studied masking countermeasure against side-channel attacks. IPM-FD further extends the IPM scheme with fault detection capabilities. However, implementing IPM-FD in software, especially on embedded devices, results in high computational overhead. Therefore, in this work we perform a detailed analysis of all building blocks of the IPM-FD scheme and propose a Masked Processing Unit to accelerate all operations, for example multiplication and the IPM-FD-specific Homogenization. We can then offload these computationally expensive operations to dedicated hardware. With only a $4.05\%$ and $4.01\%$ increase in Look-Up Tables and Flip-Flops (Random Number Generator excluded), respectively, compared with the baseline CV32E40p RISC-V core, we achieve up to a $16.55\times$ speed-up with the optimal configuration. We then practically evaluate the side-channel security via uni- and bivariate Test Vector Leakage Assessment, which exhibits no leakage. Finally, we use two different methods to simulate the injected faults and confirm the fault detection capability of up to $k-1$ faults, with $k$ being the replication factor.
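For readers unfamiliar with the underlying encoding, here is a minimal sketch of inner-product sharing. It uses a toy prime field rather than the binary extension fields used in practice, and omits the fault-detection redundancy of IPM-FD; the public vector L with L[0] fixed to 1 follows the usual IPM convention.

```python
import secrets

P = 257  # toy prime field; real IPM works over binary fields GF(2^n)

def ipm_encode(x: int, L: list[int]) -> list[int]:
    """Split x into shares y such that <L, y> = x (mod P); L[0] is fixed to 1."""
    tail = [secrets.randbelow(P) for _ in L[1:]]          # random shares
    y0 = (x - sum(l * s for l, s in zip(L[1:], tail))) % P  # completing share
    return [y0] + tail

def ipm_decode(y: list[int], L: list[int]) -> int:
    """Recover the secret as the inner product of shares with the public L."""
    return sum(l * s for l, s in zip(L, y)) % P
```

Masked multiplication and the homogenization step accelerated by the paper's Masked Processing Unit operate on exactly this representation.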
Last updated:  2025-11-28
Populating the Zoo of Rugged Pseudorandom Permutations
Jean Paul Degabriele and Vukašin Karadžić
A Rugged Pseudorandom Permutation (RPRP) is a variable-input-length tweakable cipher satisfying a security notion that is intermediate between tweakable PRP and tweakable SPRP. It was introduced at CRYPTO 2022 by Degabriele and Karadžić, who additionally showed how to generically convert such a primitive into nonce-based and nonce-hiding AEAD schemes satisfying either misuse-resistance or release-of-unverified-plaintext security as well as Nonce-Set AEAD which has applications in protocols like QUIC and DTLS. Their work shows that RPRPs are powerful and versatile cryptographic primitives. However, the RPRP security notion itself can seem rather contrived, and the motivation behind it is not immediately clear. Moreover, they only provided a single RPRP construction, called UIV, which puts into question the generality of their modular approach and whether other instantiations are even possible. In this work, we address this question positively by presenting new RPRP constructions, thereby validating their modular approach and providing further justification in support of the RPRP security definition. Furthermore, we present a more refined view of their results by showing that strictly weaker RPRP variants, which we introduce, suffice for many of their transformations. From a theoretical perspective, our results show that the well-known three-round Feistel structure achieves stronger security as a permutation than a mere pseudorandom permutation---as was established in the seminal result by Luby and Rackoff. We conclude on a more practical note by showing how to extend the left domain of one RPRP construction for applications that require larger values in order to meet the desired level of security.
Last updated:  2025-11-28
Hardness and Algorithms for Batch LPN under Dependent Noise
Xin Li, Songtao Mao, and Zhaienhe Zhou
We study the Batch Learning Parity with Noise (LPN) variant, where the oracle returns $k$ samples in a batch, and draws the noise vector from a joint noise distribution $\mathcal{D}$ on $\mathbb{F}_2^k$ (instead of i.i.d.). This model captures a broad range of correlated or structured noise patterns studied in cryptography and learning theory, and was formally defined in recent work by Golowich, Moitra, and Rohatgi (FOCS 2024). Consequently, understanding which distributions preserve the hardness of LPN has become an important question. On the hardness side, we design several reductions from standard LPN to Batch LPN. Our reductions provide a more comprehensive characterization of hard distributions. Specifically, we show that a Batch LPN instance is as hard as standard LPN with noise rate $\eta:=\frac{1}{2}-\varepsilon$ provided that its noise distribution $\mathcal{D}$ satisfies one of the following: 1. The noise distribution $\mathcal{D}$ satisfies a mild Fourier-analytic condition (specifically, $\sum_{s\neq 0}|\widehat{P}_{\mathcal{D}}(s)|\le 2\varepsilon$). 2. The noise distribution $\mathcal{D}$ is $\Omega(\eta \cdot k 2^{-k})$-dense (i.e., every error pattern occurs with probability at least $\Omega(\eta \cdot k 2^{-k})$) for $\eta < 1/k$. 3. The noise distribution $\mathcal{D}$ is a $\delta$-Santha-Vazirani source. Our reduction improves the allowable bias $\delta$ from $O(2^{-k}\varepsilon)$ (in Golowich et al.) to $O(2^{-k/2}\varepsilon)$. On the algorithmic side, we design an algorithm for solving Batch LPN whenever the noise distribution assigns sufficiently small probability to at least one point, which gives an algorithm--hardness separation for Batch LPN. Our algorithm can be seen as an extension of Arora and Ge's (ICALP 2011) linearization attack. Our reduction is based on random affine transformations, developed and analyzed through the lens of Fourier analysis, providing a general framework for studying various LPN variants.
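Condition 1 above can be checked by brute force for small batch sizes k. The sketch below assumes the unnormalized Fourier convention $\widehat{P}_{\mathcal{D}}(s)=\sum_e \mathcal{D}(e)(-1)^{\langle s,e\rangle}$ (the paper's normalization may differ by a constant factor), and includes i.i.d. Bernoulli noise for comparison.

```python
import itertools

def fourier_coeff(D: dict, s: tuple) -> float:
    """\hat{P}_D(s) = sum_e D(e) * (-1)^{<s, e>}, with D a dict over {0,1}^k."""
    return sum(p * (-1) ** (sum(si * ei for si, ei in zip(s, e)) % 2)
               for e, p in D.items())

def bias_sum(D: dict, k: int) -> float:
    """The quantity sum_{s != 0} |\hat{P}_D(s)| from condition 1."""
    return sum(abs(fourier_coeff(D, s))
               for s in itertools.product((0, 1), repeat=k) if any(s))

def iid_bernoulli(eta: float, k: int) -> dict:
    """Independent noise baseline: D(e) = eta^{wt(e)} * (1-eta)^{k-wt(e)}."""
    return {e: (eta ** sum(e)) * ((1 - eta) ** (k - sum(e)))
            for e in itertools.product((0, 1), repeat=k)}
```

For i.i.d. Bernoulli(eta) noise the coefficients factor as (1-2*eta)^wt(s), so the sum evaluates to (2-2*eta)^k - 1, which grows with k; the condition is therefore most meaningful for distributions whose nonzero coefficients are small or cancel.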
Last updated:  2025-11-28
A Lattice-Based IND-CCA Threshold KEM from the BCHK+ Transform
Oleksandra Lapiha and Thomas Prest
We present a simple IND-CCA lattice-based threshold KEM. At a high level, our design is based on the BCHK transform (Canetti et al., EUROCRYPT 2004), which we adapt to the lattice setting by combining it with the FO transform (Fujisaki and Okamoto, PKC 1999) in order to achieve decryption consistency. As for the BCHK transform, our construction requires a threshold identity-based encryption (TIBE) scheme with suitable properties. We build such an IBE by combining the ABB IBE (Agrawal, Boneh, Boyen, EUROCRYPT 2010) with recent advances in lattice threshold cryptography, such as the threshold-friendly signature Plover (Esgin et al., EUROCRYPT 2024) and a variant of the Threshold Raccoon scheme (Katsumata et al., CRYPTO 2024). The security proof of our scheme relies on a new assumption which we call the Coset-Hint-MLWE assumption, and which is a natural generalisation of the Hint-MLWE assumption (Kim et al., CRYPTO 2023). We prove the hardness of Coset-Hint-MLWE under standard assumptions. We believe this new assumption may be of independent interest. Unlike prior works on IND-CCA lattice-based threshold KEMs, our construction only relies on simple algorithmic tools and does not use heavy machinery such as multi-party computation or threshold fully homomorphic encryption.
Last updated:  2025-11-28
Revisiting Linkable Ring Signatures with Logarithmic Verification Complexity
Danai Balla and Pyrros Chaidos
Ring Signatures allow a user to sign on behalf of an ad-hoc set of public keys, while hiding their identity inside that set. Linkable Ring Signatures (LRS) add the functionality of detecting signatures originating from the same signer. They have found many applications in anonymous transactions and e-voting. The LLRing family of linkable ring signature schemes by Hui and Chau (ESORICS 2024) is one of the more efficient LRS schemes. However, we show that it has an unlinkability vulnerability, meaning an adversary can create more unlinkable signatures than the number of secret keys they own. The vulnerability is caused by the introduction of unwanted structure to base elements used in proofs. We also find a similar attack against the Threshold Ring Referral (TRR) scheme of Ta, Hui, and Chau (Security and Privacy 2025), rendering it unsound. We show how to achieve strong linkability with logarithmic verification complexity in the pairing based setting by first reverting the unsafe construction of base elements, and by also adjusting the arguments of knowledge used in order to maintain efficiency. Concretely, by modifying the Dory argument to fit our scheme we are able to match the performance of LLRing-P. We separate the design and analysis of the scheme from the instantiation of the knowledge arguments, which helps prevent unwanted interactions between the two, and can provide easier upgrades to more efficient proof systems.
Last updated:  2025-11-28
Hard Instances of Discrete Logarithm Problem and Cryptographic Applications
Christopher Battarbee, Arman Darbinyan, and Delaram Kahrobaei
Let f be an arbitrary positive integer valued function. The goal of this note is to show that one can construct a finitely generated group in which the discrete log problem is polynomially equivalent to computing the function f. In particular, we provide infinite, but finitely generated groups, in which the discrete logarithm problem is arbitrarily hard. As another application, we construct a family of two-generated groups that have polynomial time word problem and NP-complete discrete log problem. Additionally, using our framework, we propose a generic scheme of cryptographic protocols, which might be of independent interest.
Last updated:  2025-11-28
Correction-Based Fault Attack Against Randomized MAYO
Mohamed Abdelmonem, Lejla Batina, Durba Chatterjee, and Håvard Raddum
This paper introduces a novel and practical fault injection attack targeting the randomized version of the MAYO post-quantum signature scheme. While prior attacks on MAYO either relied on deterministic signing modes or specific memory assumptions, our attack succeeds without such constraints. By exploiting the inherent structural properties of MAYO signatures, we combine targeted fault injections with signature correction techniques to extract partial information about the secret oil space. By systematically accumulating such partial information across multiple fault-induced signatures and utilizing linear dependencies among oil vectors, we present an efficient method for achieving full secret key recovery. The attack requires only one fault injection per oil coefficient, repeated a small (i.e., 8, 17, 10, or 12 for the different MAYO versions, respectively) number of times. We demonstrate the targeted fault injection attack on a MAYO implementation on an ARM Cortex-M3 processor via clock glitching, establishing the feasibility of the attack in practice. Our approach is validated through simulations, and a detailed computational cost analysis is provided. Additionally, we demonstrate the ineffectiveness of some previously proposed countermeasures against our attack, thereby highlighting the urgent need for developing more robust protection mechanisms for multivariate post-quantum signature schemes, such as MAYO.
Last updated:  2025-11-28
Computational Robust (Fuzzy) Extractors for CRS-dependent Sources with Minimal Min-entropy
Hanwen Feng and Qiang Tang
Robust (fuzzy) extractors are very useful for, e.g., authenticated key exchange from a shared weak secret and remote biometric authentication against active adversaries. They enable two parties to extract the same uniform randomness with the aid of a ``helper'' string. More importantly, they have a built-in authentication mechanism: tampering with the ``helper'' string will be detected. Unfortunately, as shown by Dodis and Wichs, in the information-theoretic setting, a robust extractor for an $(n,k)$-source requires $k>n/2$, which is in sharp contrast with randomness extractors, which only require $k=\omega(\log n)$. Existing work either relies on random oracles or introduces a CRS and works only for CRS-independent sources (even in the computational setting). In this work, we give a systematic study of robust (fuzzy) extractors for general CRS-dependent sources. We show that in the information-theoretic setting, the same entropy lower bound holds even in the CRS model; we then show that, in the computational setting, we can have robust extractors for general CRS-dependent sources with only minimal entropy. At the heart of our construction lies a new primitive called $\kappa$-MAC that is unforgeable with a weak key and hides all partial information about the key (both in the presence of auxiliary input), by which we can compile any conventional randomness extractor into a robust one. We further augment $\kappa$-MAC to defend against ``key manipulation'' attacks, which yields a robust fuzzy extractor for CRS-dependent sources.
Last updated:  2025-11-28
Traceable Secret Sharing Revisited
Vipul Goyal, Abhishek Jain, and Aditi Partap
In a secret sharing scheme for a monotone access structure $\mathcal{A}$, one can share a secret among a set of parties such that all subsets of parties authorized by $\mathcal{A}$ can reconstruct the secret while all other subsets learn nothing. However, what if an unauthorized subset of parties collude and offer their shares for sale? Specifically, suppose that the parties pool their shares to create a reconstruction box that reveals the secret upon receiving enough additional shares as input. To deter this behavior, Goyal et al. (CRYPTO'21) introduced the notion of traceable secret sharing (TSS), where it is possible to provably trace reconstruction boxes containing leaked secret shares back to their respective parties. Goyal et al. and subsequent work presented definitions and constructions of TSS for the threshold access structure. In this work, we revisit the notion of TSS. - We identify shortcomings in previous formulations of TSS and present new, strengthened definitions that are not achieved by known constructions. We show that it is easy to build reconstruction boxes for which the tracing guarantees of all previous works fail. - We extend the study of TSS beyond threshold to general access structures. Our new definitions are, in fact, necessary for this setting as natural adaptations of existing definitions become vacuous for general access structures. - We present new constructions of TSS that satisfy our definitions for all access structures captured by monotone circuits. One of our constructions relies solely on one-way functions while the other additionally relies on indistinguishability obfuscation.
Last updated:  2025-11-28
You Only Decapsulate Once: Ciphertext-Independent Single-Trace Passive Side-Channel Attacks on HQC
Zhenzhi Lai, Ruiyi Zhang, Zhiyuan Zhang, Julius Hermelink, Michael Schwarz, Van-Thuan Pham, and Udaya Parampalli
Hamming Quasi-Cyclic (HQC) has recently been selected by NIST, after the Round 4 submission, as a post-quantum key encapsulation mechanism (KEM) standard and will soon be widely deployed. Therefore, it is important to ensure its implementation is constant-time, i.e., resistant to side-channel attacks. Existing timing attacks on HQC exploit non-constant-time source code and a decryption routine that is vulnerable to chosen-ciphertext attacks. These active attacks require constructing thousands of invalid ciphertexts, and thus, they can be easily detected. The latest HQC implementation has mitigated all these attacks by making its source code constant-time. In this work, we provide a new perspective on reviewing the implementation of HQC and exploiting timing leakages. For the first time, we show that an attacker can recover the secret key of HQC without targeting the CCA-insecure decryption or the internal states of message decryption. Specifically, an attacker can exploit the timing leakages that occur when processing sparse vectors, which are ciphertext-independent, to recover the secret key by measuring the leakages only once. We find two such timing leakages in the latest stable HQC implementation, supposedly constant-time, and practically extract the leakages even when the process is protected by AMD Secure Encrypted Virtualization. We also show that a power side channel can extract similar leakages on embedded devices. Our findings apply to all code-based KEMs submitted to Round 4 of the NIST PQC process. We show that an attacker can also perform similar passive attacks to recover the session key of BIKE and Classic McEliece. To help write constant-time code, we propose and test a workflow that uses CT-grind during development. We find that CT-grind can effectively find all timing leakages in various implementations of HQC. Therefore, we suggest that cryptographic developers consistently use constant-time analysis tools when developing code.
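The class of leakage involved, secret-dependent handling of sparse vectors, can be illustrated with a toy sketch. This is not HQC's actual code: it merely contrasts expanding a secret sparse support with secret-indexed writes against a variant that touches every index and selects with arithmetic (and note that Python itself gives no real timing guarantees).

```python
def expand_sparse_leaky(support: list[int], n: int) -> list[int]:
    """Writes only at the secret positions: the memory access pattern, and any
    timing that correlates with it, depends on the secret support."""
    v = [0] * n
    for pos in support:
        v[pos] = 1
    return v

def expand_sparse_ct(support: list[int], n: int) -> list[int]:
    """Scans every index and selects without secret-dependent branches or
    indexing, so the access pattern is independent of the support."""
    v = [0] * n
    for i in range(n):
        hit = 0
        for pos in support:
            d = i ^ pos
            is_eq = 1 ^ (((d | -d) >> 63) & 1)  # 1 iff d == 0, branch-free
            hit |= is_eq
        v[i] = hit
    return v
```

The constant-time variant costs O(n*w) instead of O(w), which is exactly the kind of trade-off that tempts implementers into the leaky pattern the paper exploits.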
Last updated:  2025-11-28
Attacks and Remedies for Randomness in AI: Cryptanalysis of PHILOX and THREEFRY
Jens Alich, Thomas Eisenbarth, Hossein Hadipour, Gregor Leander, Felix Mächtle, Yevhen Perehuda, Shahram Rasoolzadeh, Jonas Sander, and Cihangir Tezcan
In this work, we address the critical yet understudied question of the security of the most widely deployed pseudorandom number generators (PRNGs) in AI applications. We show that these generators are vulnerable to practical and low-cost attacks. With this in mind, we conduct an extensive survey of randomness usage in current applications to understand the efficiency requirements imposed in practice. Finally, we present a cryptographically secure and well-understood alternative, which has a negligible effect on the overall AI/ML workloads. More generally, we recommend the use of cryptographically strong PRNGs in all contexts where randomness is required, as past experience has repeatedly shown that security requirements may arise unexpectedly even in applications that appear uncritical at first.
Last updated:  2025-11-27
MacaKey: Full-State Keyed Sponge Meets the Summation-Truncation Hybrid
Charlotte Lefevre and Mario Marhuenda Beltrán
The keyed sponge construction has benefited from various efficiency advancements over time, most notably leading to the possibility of absorbing over the entire state, as in the full-state keyed sponge. However, squeezing has always remained limited to blocks smaller than the permutation size, as security is determined by the capacity c, the size of the non-squeezed state. In this work, we present Macakey, an improved version of the full-state keyed sponge that not only absorbs over the entire state but also squeezes over the entire state. The scheme combines ideas of the full-state keyed sponge with those of the summation-truncation hybrid of Gunsing and Mennink. We demonstrate that, with no sacrifice in generic security and using only c bits of extra storage, Macakey can significantly boost performance, particularly in scenarios requiring large amounts of output. For example, using the 320-bit Ascon permutation with a 256-bit capacity, Macakey outputs five times as many bits as the full-state keyed sponge.
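The limitation the abstract starts from is easy to see in code. The sketch below shows classical squeezing, where only the rate part of the state leaves per permutation call and the capacity part is withheld; the "permutation" here is a stand-in mixing function, not a real cryptographic permutation, and the byte-level rate split is our simplification.

```python
import hashlib

def toy_permute(state: bytes) -> bytes:
    """Stand-in for a public permutation (just a fixed-length mixing function
    for illustration; not actually a permutation)."""
    return hashlib.shake_256(state).digest(len(state))

def sponge_squeeze(state: bytes, rate: int, n_blocks: int) -> bytes:
    """Classical squeezing: per permutation call, only `rate` bytes are
    released; the capacity bytes never leave the state."""
    out = bytearray()
    for _ in range(n_blocks):
        out += state[:rate]          # release the rate part
        state = toy_permute(state)   # capacity part stays secret
    return bytes(out)
```

With Ascon-like parameters (40-byte state, 8-byte rate), each call yields 8 bytes; Macakey's contribution, per the abstract, is to safely release output derived from the full 40-byte state per call, with the exact mechanism given in the paper.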
Last updated:  2025-11-27
Pairing-Based SNARGs with Two Group Elements
Gal Arnon, Jesko Dujmovic, and Eylon Yogev
SNARGs are cryptographic primitives that allow a prover to demonstrate membership in an NP language while sending a proof that is much smaller than the witness. In this work, we focus on the succinctness of publicly-verifiable group-based SNARGs, analyzed in a model that combines both a generic bilinear group $(\mathbb{G}_{1} \times \mathbb{G}_{2} \to \mathbb{G}_{T})$ and a random oracle (the GGM + ROM). We construct the first publicly-verifiable SNARG in the GGM + ROM where the proof consists of exactly $2$ elements of $\mathbb{G}_{1}$ and no additional bits, achieving the smallest proof size among all known publicly verifiable group-based SNARGs. Our security analysis is tight, ensuring that the construction incurs no hidden security losses. Concretely, when instantiated with the BLS12-381 curve for 128-bit security, our scheme yields a proof size of $768$ bits, nearly a $2\times$ improvement over the best known pairing-based SNARG. While our scheme is not yet concretely efficient, it demonstrates the feasibility of ultra-short proofs and opens the door to future practical instantiations. Complementing this construction, we establish a new lower bound for group-based SNARGs. We prove that under mild and natural restrictions on the verifier (which are satisfied by all known schemes) no SNARG exists in the Maurer GGM + ROM with a proof that consists of a single group element (assuming one-way functions). This substantially strengthens the lower bound of Groth, which was more restrictive and did not extend to settings with a random oracle.
Last updated:  2025-11-27
One Fell Swoop: A Single-Trace Key-Recovery Attack on the Falcon Signing Algorithm
Kang Li, Shouran Ma, Haochen Dou, and Qian Guo
Falcon, a lattice-based signature scheme selected for NIST post-quantum standardization, is notable for its compact signature size alongside a complex signing procedure involving extensive floating-point arithmetic. Prior side-channel attacks on Falcon, while demonstrating vulnerabilities, have consistently required a large number of power traces for successful key recovery; this critical efficiency gap means previously reported attacks are often impractical in real-world scenarios where trace collection is limited. This paper presents a new single-trace attack on Falcon. We identify and exploit novel leakage points within the floating-point conversion and Fast Fourier Transform (FFT) routines during the secret key expansion, which allow us to progressively partition the possible values of the secret key coefficients. By identifying a sufficient number of these coefficients, we establish a system of linear equations that can be solved to recover the entire secret key. Our attack is particularly critical for the \texttt{sign\_dyn} design---the memory-efficient implementation adopted in important cryptographic libraries and reference implementations---as it executes key expansion during every signature operation. We emphasize that this is the \textbf{first single-trace attack on the Falcon signing procedure itself}, providing a more compelling threat scenario than previous work. We validate our attack on an ARM Cortex-M4 microcontroller, demonstrating a 100\% key recovery success rate with just a single power trace for both Falcon-512 and Falcon-1024 in both signing designs---\texttt{sign\_tree} and \texttt{sign\_dyn}---compiled at the \texttt{-O0} level. While the \texttt{-O3} optimization level mitigates some leakages, our multi-trace attack remains effective in the practically used \texttt{sign\_dyn} design, recovering 80 out of 100 Falcon-512 keys with only 5 traces.
Our findings expose a critical implementation vulnerability in Falcon, highlighting the urgent necessity of integrating countermeasures to protect Falcon in real-world applications.
Last updated:  2025-11-27
Taming the Stack: Proof-Preserving Blockwise FrodoKEM on RISC-V Devices with Hardware Acceleration
Frank Hartmann
FrodoKEM provides conservative post-quantum security through unstructured lattices, yet its deployment on embedded systems is historically constrained by high memory requirements. While state-of-the-art implementations mitigate this by generating the public matrix on-the-fly, they remain bottlenecked by the sequential generation of secret matrices, which enforces a rigid trade-off between stack usage and recomputation overhead. To address this, we propose a blockwise secret generation mechanism that enables efficient random access to arbitrary matrix tiles. We formally prove that this modification is indistinguishable from the standard specification in the Random Oracle Model, thereby preserving IND-CCA2 security. By evaluating this approach on a 32-bit RISC-V core with custom cryptographic extensions, we demonstrate tiled encapsulation and key generation strategies that maintain constant transient stack usage ($\approx$2--3 kB) across all parameter sets. This design effectively decouples memory footprint from algorithmic complexity, enabling flexible buffering strategies that optimize performance according to the available heap memory of the target platform.
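Random access to arbitrary matrix tiles, as described above, is generically achievable by deriving each tile directly from the seed plus its coordinates via an XOF. The sketch below is hypothetical: the domain-separation encoding and field widths are ours, not the paper's (or FrodoKEM's) actual construction, and it only illustrates why tiles can be regenerated on demand without materializing the whole matrix.

```python
import hashlib

def secret_tile(seed: bytes, row_blk: int, col_blk: int, n_bytes: int) -> bytes:
    """Derive one matrix tile from (seed, tile coordinates). Because the XOF
    input encodes the coordinates, any tile can be regenerated independently,
    keeping transient memory bounded by a single tile."""
    xof = hashlib.shake_128(
        seed + row_blk.to_bytes(2, "little") + col_blk.to_bytes(2, "little"))
    return xof.digest(n_bytes)
```

This is the property that lets a blockwise implementation trade recomputation for stack space freely: tiles are deterministic, independent, and addressable.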
Last updated:  2025-11-27
On Maximum Size Simultaneous Linear Approximations in Ascon and Keccak and Related Translation and Differential Properties
Nicolas T. Courtois, Frédéric Amiel, and Alexandre Bonnard de Fonvillars
In this paper we study the S-box known as Chi or $\chi$, initially proposed by Daemen in 1995 and very widely used ever since in Keccak, Ascon, and many other designs. This type of cipher is typically analyzed in recent research in terms of subspace trail attacks [TeDi19] and vector space invariants. An interesting question is then when different spaces are mapped to each other by translations with a constant. In this paper we relax this fundamental question and consider arbitrary sets of points and their translations. We generalize previous S-box partial linearization results on Keccak and Ascon from Eurocrypt 2017, introducing new ways to linearize the Ascon S-box to the maximum possible extent. On this basis we show further remarkable properties and some surprising connections between simultaneous linear and prominent differential properties. We exhibit a new family of maximum-size and optimal approximation properties with 11 points, beyond the maximum size of any set in the DDT table. We prove a theorem which guarantees that properties of this type are stable under arbitrary input-side translations, a result that holds for all quadratic permutations. All this is placed in the context of previous work on the classification of 5-bit quadratic permutations.
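For concreteness, the 5-bit $\chi$ map referred to above is $\chi(a)_i = a_i \oplus (\lnot a_{i+1} \wedge a_{i+2})$ with indices mod 5, and the DDT against which the abstract compares set sizes can be computed exhaustively. A short sketch (function names are ours):

```python
def chi(x: int, n: int = 5) -> int:
    """chi(a)_i = a_i XOR (NOT a_{i+1} AND a_{i+2}), indices taken mod n."""
    a = [(x >> i) & 1 for i in range(n)]
    y = [a[i] ^ ((a[(i + 1) % n] ^ 1) & a[(i + 2) % n]) for i in range(n)]
    return sum(b << i for i, b in enumerate(y))

def ddt(sbox, n: int = 5) -> list[list[int]]:
    """Difference distribution table: ddt[dx][dy] = #{x : S(x^dx)^S(x) = dy}."""
    t = [[0] * (1 << n) for _ in range(1 << n)]
    for x in range(1 << n):
        for dx in range(1 << n):
            t[dx][sbox(x ^ dx) ^ sbox(x)] += 1
    return t
```

Since n = 5 is odd, $\chi$ is a permutation, and because it is quadratic, each derivative x -> S(x+dx)+S(x) is affine, which is the structural fact behind the translation-stability theorem the abstract announces.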
Last updated:  2025-11-27
IP Masking with Generic Security Guarantees under Minimum Assumptions, and Applications
Sebastian Faust, Loïc Masure, Elena Micheli, Hai Hoang Nguyen, Maximilian Orlt, and François-Xavier Standaert
Leakage-resilient secret sharing is a fundamental building block for securing implementations against side-channel attacks. In general, such schemes correspond to a tradeoff between the complexity of the resulting masked implementations, their security guarantees and the physical assumptions they require to be effective. In this work, we revisit the Inner-Product (IP) framework, where a secret \(y\) is encoded by two vectors \((\vec{\omega},\vec{y})\), such that their inner product is equal to \(y\). So far, the state of the art is split in two. On the one hand, the most efficient IP masking schemes (in which \(\vec{\omega}\) is public but random) are provably secure with the same security notions (i.e., in the abstract probing model) as Boolean masking, yet at the cost of a slightly more expensive implementation. Hence, their theoretical interest and practical relevance remain unclear. On the other hand, the most secure IP masking schemes (in which \(\vec{\omega}\) is secret) lead to expensive implementations. We improve this state of the art by investigating the leakage resilience of IP masking with public \(\vec{\omega}\) coefficients in the bounded leakage model, which well captures implementation contexts where the physical noise is negligible. Furthermore, we do that without assuming independent leakage from the shares, which may be challenging to enforce in practice. In this model, we show that if \(m\) bits are leaked from the \(d\) shares \(\vec{y}\) of the encoding over an \(n\)-bit field, then, with probability at least \(1 - 2^{-\lambda}\) over the choice of \(\vec{\omega}\), the scheme is \(\mathcal{O}{(\sqrt{2^{-(d-1)\cdot n + m + 2\lambda}}})\)-leakage resilient. We additionally show that in large Mersenne-prime fields, a wise choice of the public coefficients \(\vec{\omega}\) can yield leakage resilience up to \(\mathcal{O}({n \cdot 2^{-d \cdot n +n + d}})\), in the case where one physical bit from each share is revealed to the adversary.
The exponential rate of the leakage resilience we put forward significantly improves upon previous bounds in additive masking, where the past literature exhibited a constant exponential rate only. We additionally discuss the applications of our results, and the new research challenges they raise.
Last updated:  2025-11-27
Cheap Data Integrity Without Consensus
Orfeas Stefanos Thyfronitis Litos, Zhaoxuan Wu, Alfredo Musumeci, Songyun Hu, James Helsby, Michael Breza, and William Knottenbelt
Blockchains enable decentralised applications that withstand Byzantine failures and do not need a central authority. Unfortunately, their massive replication requirements preclude their use on constrained devices. We propose a novel blockchain-based data structure which forgoes replication without affecting the append-only nature of blockchains, making it suitable for maintaining data integrity over networks of storage-constrained devices. Our solution does not provide consensus, which is not required by our motivating application, namely securely storing sensor data of containers in cargo ships. We elucidate the practical promise of our technique by following a multi-faceted approach: We (i) formally prove the security of our protocol in the Universal Composition (UC) setting, as well as (ii) provide a small-scale proof-of-concept implementation, (iii) a performance simulation for large-scale deployments which showcases a reduction in storage of more than $1000\times$ compared to traditional blockchains, and (iv) a resilience simulation that predicts the practical effects of network jamming attacks.
Last updated:  2025-11-27
Multi-Verifier Keyed-Verification Anonymous Credentials
Jan Bobolz, Emad Heydari Beni, Anja Lehmann, Omid Mirzamohammadi, Cavit Özbay, and Mahdi Sedaghat
Keyed-Verification anonymous credentials (KVAC) enable privacy-preserving authentication and can be seen as the symmetric primitive of conventional anonymous credentials: issuance and verification of credentials requires a shared secret key. The core advantage of KVACs is that they can be realized without pairings, whose use still appears to be a significant bottleneck when it comes to real-world adoption. KVACs provide all the benefits of anonymous credentials, in particular multi-show unlinkability, but only work in the setting where the issuer and verifier are the same entity, limiting the applications they can be used in. In this work we extend the idea of keyed-verification credentials to a setting where multiple verifiers are once again supported, each sharing an individual secret key with the issuer. We formally introduce this as multi-verifier keyed-verification anonymous credentials (mKVACs). While users must now get verifier-specific credentials, each credential still provides multi-show unlinkability. In terms of security, mKVAC naturally strengthens the single-verifier variant, as it guarantees that corruption of any verifier does not impact the unforgeability guarantees for other verifiers. The main challenge therein is not to trade this added flexibility for privacy, and to hide the verifier's identity during credential issuance. We provide formal definitions of all desired security and privacy features and propose a provably secure and pairing-free construction. Along the way, we develop a new KVAC-like primitive that authenticates group elements and offers statistical privacy, solving the open problem of combining multi-verifier support and pairing-freeness. Finally, we demonstrate the practicality of our protocol via implementation benchmarks.
Last updated:  2025-11-27
Anamorphic Resistant Encryption: the Good, the Bad and the Ugly
Davide Carnemolla, Dario Catalano, Emanuele Giunta, and Francesco Migliaro
Anamorphic encryption (AE), introduced by Persiano, Phan and Yung at Eurocrypt '22, makes it possible to establish secure communication in scenarios where users might be forced to hand over their decryption keys to some hostile authority. Over the last few years, several works have improved our understanding of the primitive by proposing novel realizations, new security notions and studying inherent limitations. This work makes progress mainly on this last line of research. We show concrete realizations of public key encryption schemes that, provably, cannot be turned anamorphic. These were called Anamorphic Resistant Encryption (ARE, for short) in a recent work of Dodis and Goldin. We also show that, under certain conditions, anamorphic encryption is equivalent to algorithm substitution attacks. This allows us to positively reinterpret our AREs as PKE schemes provably resistant to subversion attacks. To the best of our knowledge, these seem to be the first IND-CPA secure schemes achieving subversion resistance without trust assumptions or non-black-box decomposition techniques. Our two AREs heavily rely, among other things, on a direct usage of extremely lossy functions: here the lossiness property is used in the constructions, rather than just in the proofs. The first construction is in the public parameters model and also requires iO. The second construction eliminates the need for both public parameters and iO, but is in the random oracle model and relies on the novel concept of robust extremely lossy functions with group structure, a primitive that we define and (show how to) realize in this paper.
Last updated:  2025-11-27
Quantum Circuit Synthesis for AES with Low DW-cost
Haoyu Liao and Qingbin Luo
Symmetric cryptography is confronting threats posed by quantum computing, including Grover's search algorithm and Simon's algorithm. In fault-tolerant quantum computation, the limited qubit count, connectivity constraints, and error rates of quantum hardware impose stringent requirements on the implementation of cryptographic quantum circuits. Constructing low-resource quantum circuit models forms the foundation for evaluating algorithmic resistance to quantum threats. At CRYPTO 2019, Jaques et al. justified the adoption of the depth-times-width cost (DW-cost) as a metric for quantum circuits by incorporating advancements in quantum computation and error correction. In this work, we address the fundamental limitations of in-place implementations of AES quantum circuits by proposing a set of in-place synthesis methods centered on DW-cost optimization. First, we prove that within the composite field arithmetic framework, intermediate circuit states can be utilized to uncompute S-box input states, and introduce a novel design pathway and circuit structure for in-place S-box quantum circuits. Second, we establish the necessary conditions for maximizing parallelization of Toffoli gates under minimal-width constraints in binary field multiplication. Through co-design and optimization of multiple nonlinear components, we construct a compact in-place S-box with a DW-cost of merely 276. Finally, building on this, we achieve quantum circuit implementations for AES-128, AES-192, and AES-256 via co-optimization of key expansion and round functions, reducing their DW-cost values to 65,280, 87,552, and 112,896 respectively. These results indicate a reduction of at least 46%, 45%, and 45% compared to existing state-of-the-art solutions. This study establishes new technical benchmarks for low-resource fault-tolerant implementations of symmetric cryptography in the post-quantum era.
(This is a revised version containing Clifford+T resource estimates for both AES-128 Grover oracle and encryption oracle. Key revisions are highlighted in red.)
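As a quick sanity check on the reported figures, the stated reductions imply a lower bound on the prior state-of-the-art costs: a new DW-cost that is at least 46% below the prior best means the prior best was at least new/(1-0.46). This is a derived back-of-envelope computation, not a number taken from the paper:

```python
# Implied prior DW-costs from the abstract's new costs and claimed reductions
# (derived figures, not stated in the paper).
def implied_prior_cost(new_cost, reduction):
    # new = old * (1 - reduction)  =>  old = new / (1 - reduction)
    return new_cost / (1.0 - reduction)

for variant, new_cost, red in [("AES-128", 65_280, 0.46),
                               ("AES-192", 87_552, 0.45),
                               ("AES-256", 112_896, 0.45)]:
    prior = implied_prior_cost(new_cost, red)
    print(f"{variant}: new DW-cost {new_cost:,}, prior best >= {prior:,.0f}")
```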
Last updated:  2025-11-27
Approximate CRT-Based Gadget Decomposition and Application to TFHE Blind Rotation
Olivier Bernard and Marc Joye
One of the main issues to deal with in fully homomorphic encryption is the noise growth when operating on ciphertexts. To some extent, this can be controlled thanks to a so-called gadget decomposition. A gadget decomposition typically relies on radix- or CRT-based representations to split elements as vectors of smaller chunks whose inner product with the corresponding gadget vector rebuilds (an approximation of) the original elements. Radix-based gadget decompositions present the advantage of also supporting the approximate setting: for most homomorphic operations, this has a minor impact on the noise propagation but leads to substantial savings in bandwidth, memory requirements and computational costs. A typical use-case is the blind rotation as used, for example, in the bootstrapping of the TFHE scheme. On the other hand, CRT-based representations are convenient when machine words are too small for directly accommodating the arithmetic on large operands. This arises in two typical cases: (i) in the hardware case with multipliers of restricted size, e.g., 17 bits; (ii) in the software case for ciphertext moduli above, e.g., 128 bits. This paper presents new CRT-based gadget decompositions for the approximate setting, which combine the advantages of non-exact decompositions with those of CRT-based decompositions. Significantly, they enable certain hardware or software realizations that are otherwise hardly supported, like the two aforementioned cases. In particular, we show that our new gadget decompositions provide implementations of the (programmable) bootstrapping in TFHE relying solely on native arithmetic and offering extra degrees of parallelism.
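For readers unfamiliar with gadget decompositions, the radix-based baseline the abstract contrasts against can be sketched in a few lines. Parameters q, B, l below are toy values for illustration; the paper's CRT-based construction is not shown:

```python
# Toy radix-based gadget decomposition mod q (illustrative baseline, not the
# paper's CRT construction). An element a is split into base-B digits whose
# inner product with the gadget vector (B^0, B^1, ..., B^{l-1}) rebuilds a;
# dropping the lowest digits gives the approximate setting, trading a bounded
# reconstruction error for smaller decomposition vectors.
q, B, l = 2**16, 2**4, 4                 # q = B^l (toy parameters)

def decompose(a, drop=0):
    """Base-B digits of a mod q, discarding the `drop` least significant."""
    return [(a // B**i) % B for i in range(drop, l)]

def recompose(digits, drop=0):
    """Inner product with the truncated gadget vector (B^drop, ..., B^{l-1})."""
    return sum(d * B**(drop + i) for i, d in enumerate(digits)) % q

a = 48879
assert recompose(decompose(a)) == a                  # exact reconstruction
approx = recompose(decompose(a, drop=1), drop=1)     # approximate variant
assert abs(a - approx) < B                           # error below B^drop
```

The approximate variant is what saves bandwidth and computation: fewer digits per element, at the cost of an error bounded by B^drop.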
Last updated:  2025-11-26
A New Approach to Arguments of Quantum Knowledge
James Bartusek, Ruta Jawale, Justin Raizes, and Kabir Tomer
We construct a publicly-verifiable non-interactive zero-knowledge argument system for QMA with the following properties of interest. 1. Transparent setup. Our protocol only requires a uniformly random string (URS) setup. The only prior publicly-verifiable NIZK for QMA (Bartusek and Malavolta, ITCS 2022) requires an entire obfuscated program as the common reference string. 2. Extractability. Valid QMA witnesses can be extracted directly from our accepting proofs. That is, we obtain a publicly-verifiable non-interactive argument of quantum knowledge, previously only known in a privately-verifiable setting (Coladangelo, Vidick, and Zhang, CRYPTO 2020). Our construction introduces a novel ZX QMA verifier with "strong completeness" and builds upon the coset state authentication scheme from (Bartusek, Brakerski, and Vaikuntanathan, STOC 2024) within the context of QMA verification. Along the way, we establish new properties of the authentication scheme. The security of our construction rests on the heuristic use of a post-quantum indistinguishability obfuscator. Rather than rely on the full-fledged classical oracle model (i.e. ideal obfuscation), we isolate a particular game-based property of the obfuscator that suffices for our proof, which we dub the evasive composability heuristic. As an additional contribution, we study a general method for replacing heuristic use of obfuscation with heuristic use of hash functions in the post-quantum setting. In particular, we establish security of the ideal obfuscation scheme of Jain, Lin, Luo, and Wichs (CRYPTO 2023) in the quantum pseudorandom oracle model (QPrO), which can be heuristically instantiated with a hash function. This gives us NIZK arguments of quantum knowledge for QMA in the QPrO, and additionally allows us to translate several quantum-cryptographic results that were only known in the classical oracle model to results in the QPrO.
Last updated:  2025-11-26
Privacy-preserving in cloud networks: An efficient, revocable and authenticated encrypted search scheme
Yibo Cao, Shiyuan Xu, Gang Xu, Xiu-Bo Chen, Yuling Chen, and Siu-Ming Yiu
With the widespread development of cloud networks, performing searches on encrypted data (without decryption) has become a critical issue. Public key authenticated encryption with keyword search (PAEKS) allows for the retrieval of encrypted data while resisting insider keyword guessing attacks (IKGAs). Most PAEKS schemes do not support access control in multi-receiver models. To address this limitation, attribute-based encryption has been introduced. However, the access privileges for a ciphertext may change, and existing schemes are vulnerable to quantum computing attacks, which limits their applicability and compromises security in cloud networks. In this paper, we propose RABAEKS, the first revocable and authenticated attribute-based encrypted search scheme over lattices in cloud networks. Our design allows the cloud server to enforce access control for data receivers during the search process. For practical implementation, we introduce a revocation mechanism for receivers, enabling dynamic access control. We then formalize and analyze the security of our scheme rigorously. Through performance evaluations and comparisons, we demonstrate that the search time, ciphertext size, and trapdoor size of our RABAEKS scheme are independent of the number of keywords. Additionally, the computational overhead for ciphertext generation, trapdoor generation, and the search phase in RABAEKS is, at most, 0.91%, 39.21%, and 80.95% of that of previous approaches, respectively, indicating it is efficient and practical in cloud networks.
Last updated:  2025-11-26
Unbounded Dynamic Predicate Compositions in ABE from Standard Assumptions
Nuttapong Attrapadung and Junichi Tomida
At Eurocrypt'19, Attrapadung presented several transformations that dynamically compose a set of attribute-based encryption (ABE) schemes for simpler predicates into a new ABE scheme for more expressive predicates. Due to the powerful unbounded and modular nature of his compositions, many new ABE schemes can be obtained in a systematic manner. However, his approach heavily relies on $q$-type assumptions, which are not standard. Devising such powerful compositions from standard assumptions was left as an important open problem. In this paper, we present a new framework for constructing ABE schemes that allow unbounded and dynamic predicate compositions among them, and show that the adaptive security of these composed ABE schemes is preserved by relying only on the standard matrix Diffie-Hellman (MDDH) assumption. This thus resolves the open problem posed by Attrapadung. As for applications, we obtain various ABEs that are the first such instantiations of their kinds from standard assumptions. These include the following adaptively secure large-universe ABEs for Boolean formulae under MDDH: - The first completely unbounded monotone key-policy (KP)/ciphertext-policy (CP) ABE. Such an ABE was recently proposed, but only for the KP and small-universe flavor (Kowalczyk and Wee, Eurocrypt'19). - The first completely unbounded non-monotone KP/CP-ABE. In particular, our ABEs support a new type of non-monotonicity that subsumes the previous two types of non-monotonicity, namely, by Ostrovsky et al. (CCS'07) and by Okamoto and Takashima (CRYPTO'10). - The first (non-monotone) KP and CP-ABE with constant-size ciphertexts and secret keys, respectively. - The first KP and CP-ABE with constant-size secret keys and ciphertexts, respectively. At the core of our framework lies a new partially symmetric design of the core 1-key 1-ciphertext oracle component called Key Encoding Indistinguishability, which exploits the symmetry so as to obtain compositions.
Last updated:  2025-11-26
Optimal Threshold Traitor Tracing
Sourav Das, Pratish Datta, Aditi Partap, Swagata Sasmal, and Mark Zhandry
Threshold encryption distributes decryption capability across $n$ parties such that any $t$ of them can jointly decrypt a ciphertext, while smaller coalitions learn nothing. However, once $t$ or more parties collude, traditional threshold schemes provide no accountability: a coalition of $t$ or more parties can pool its keys into a pirate decoder that enables unrestricted decryption, all without any risk of being exposed. To address this, Boneh, Partap, and Rotem [CRYPTO '24] introduced threshold traitor tracing (TTT), which equips threshold encryption with traceability. Yet, all known TTT schemes either suffer from parameter sizes growing with at least $n^{1/3}$, or rely on indistinguishability obfuscation to achieve optimal parameters. In this paper, we present the first TTT schemes with optimal parameters, where public keys, secret keys, and ciphertexts are all bounded by ${\sf poly}(\lambda,\log n)$, built solely from standard cryptographic tools and assumptions. Our first construction relies on the decisional Bilinear Diffie–Hellman (DBDH) assumption in prime order bilinear groups. Our second construction, based on the Learning with Errors (LWE) assumption, is plausibly post-quantum secure, and supports ramp-thresholds where decryption requires a larger coalition than those tolerated by security. Both of our constructions provide traceability against coalitions of arbitrary sizes. To achieve these results, we introduce a new primitive, Attribute-Based Threshold Encryption (ABTE), which generalizes both threshold and attribute-based encryption. We then combine ABTE with Mixed Functional Encryption through a new compiler to obtain our TTT schemes. We believe ABTE is a powerful primitive that may have independent applications beyond optimal TTT.
Last updated:  2025-11-26
Semigroup-homomorphic Signature
Heng Guo, Kun Tian, Fengxia Liu, and Zhiyong Zheng
In 2002, Johnson et al. posed an open problem at the Cryptographers' Track of the RSA Conference: how to construct a secure homomorphic signature on a semigroup, rather than on a group. In this paper, we introduce, for the first time, a semigroup-homomorphic signature scheme. Under certain conditions, we prove that the security of this scheme is based on the hardness of the Short Integer Solution (SIS) problem and is tightly secure. Furthermore, we extend it to a linearly semigroup-homomorphic signature scheme over lattices, and this scheme can also ensure privacy.
Last updated:  2025-11-25
Reductions Between Code Equivalence Problems
Mahdi Cheraghchi, Nikhil Shagrithaya, and Alexandra Veliche
In this paper we present two reductions between variants of the Code Equivalence problem. We give polynomial-time Karp reductions from Permutation Code Equivalence (PCE) to both Linear Code Equivalence (LCE) and Signed Permutation Code Equivalence (SPCE). Along with a Karp reduction from SPCE to the Lattice Isomorphism Problem (LIP) proved in a paper by Bennett and Win (2024), our second result implies a reduction from PCE to LIP.
Last updated:  2025-11-25
Sum-check protocol for approximate computations
Dor Bitan, Zachary DeStefano, Shafi Goldwasser, Yuval Ishai, Yael Tauman Kalai, and Justin Thaler
Motivated by the mismatch between floating-point arithmetic, which is intrinsically approximate, and verifiable computing protocols for exact computations, we develop a generalization of the classical sum-check protocol. Our generalization proves claims of the form $\sum_{x \in \{0,1\}^v} g(x) \approx H$, where $g$ is a low-degree $v$-variate polynomial over an integral domain $\mathbb{U}$. The verifier performs its check in each round of the protocol using a tunable error parameter $\delta$. If $\Delta$ is the error in the prover's initial claim, then the soundness error of our protocols degrades gracefully with $\delta/\Delta$. In other words, if the initial error $\Delta$ is large relative to $\delta$, then the soundness error is small, meaning the verifier is very likely to reject. Unlike the classical sum-check protocol, which is fundamentally algebraic, our generalization exploits the metric structure of low-degree polynomials. The protocol can be instantiated over various domains, but is most natural over the complex numbers, where the analysis draws on the behavior of polynomials over the unit circle. We also analyze the protocol under the Fiat-Shamir transform, revealing a new intermediate security phenomenon that appears intrinsic to approximation. Prior work on verifiable computing for numerical tasks typically verifies that a prover exactly executed a computation that only approximates the desired function. In contrast, our protocols treat approximation as a first-class citizen: the verifier's checks are relaxed to accept prover messages that are only approximately consistent with the claimed result. This establishes the first black-box feasibility result for approximate arithmetic proof systems: the protocol compiler is independent of how arithmetic operations are implemented, requiring only that they satisfy error bounds. 
This opens a path to verifying approximate computations while sidestepping much of the prover overhead imposed by existing techniques that require encoding real-valued data into finite field arithmetic.
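For context, the exact sum-check round that the protocol relaxes can be sketched as follows. The polynomial g and the integer domain below are illustrative; the paper's variant replaces the verifier's equality check with an approximate one, accepting when |g1(0) + g1(1) - H| is at most the tunable parameter δ:

```python
from itertools import product

# Toy first round of the (exact) sum-check protocol. The prover claims
# H = sum over the boolean cube {0,1}^3 of g, sends the univariate restriction
# g1(X) = sum_{x2,x3} g(X, x2, x3), and the verifier checks g1(0)+g1(1) == H.
# The polynomial g here is an illustrative low-degree example.
def g(x1, x2, x3):
    return 2 * x1 * x2 + x3 + x1 ** 2

def claimed_sum():
    return sum(g(*x) for x in product((0, 1), repeat=3))

def round1_poly(X):
    # Sum out the remaining variables x2, x3, leaving a univariate in X.
    return sum(g(X, x2, x3) for x2, x3 in product((0, 1), repeat=2))

H = claimed_sum()
assert round1_poly(0) + round1_poly(1) == H   # verifier's round-1 check
```

In the approximate setting of the paper, the same structure holds but the verifier accepts whenever the two sides differ by at most δ, and soundness degrades gracefully with δ/Δ as described above.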
Last updated:  2025-11-25
TACITA: Threshold Aggregation without Client Interaction
Varun Madathil, Arthur Lazzaretti, Zeyu Liu, and Charalampos Papamanthou
Secure aggregation enables a central server to compute the sum of client inputs without learning any individual input, even in the presence of dropouts or partial participation. This primitive is fundamental to privacy-preserving applications such as federated learning, where clients collaboratively train models without revealing raw data. We present a new secure aggregation protocol, TACITA, in the single-server setting that satisfies four critical properties simultaneously: (1) one-shot communication from clients with no per-instance setup, (2) input soundness, i.e., the server cannot manipulate the ciphertexts, (3) constant-size communication per client, independent of the number of participants per instance, and (4) robustness to client dropouts. Previous works on secure aggregation that achieve one-shot communication, namely Willow and OPA (CRYPTO'25), do not provide input soundness and allow the server to manipulate the aggregation. They consequently do not achieve full privacy and only achieve differential privacy guarantees at best. We achieve full privacy at the cost of assuming a PKI. Specifically, TACITA relies on a novel cryptographic primitive we introduce and realize: succinct multi-key linearly homomorphic threshold signatures (MKLHTS), which enables verifiable aggregation of client-signed inputs with constant-size signatures. To encrypt client inputs, we adapt the Silent Threshold Encryption (STE) scheme of Garg et al. (CRYPTO 2024) to support ciphertext-specific decryption and additive homomorphism. We formally prove security in the Universal Composability framework and demonstrate practicality through an open-source proof-of-concept implementation, showing our protocol achieves scalability without sacrificing efficiency or requiring new trust assumptions.
Last updated:  2025-11-25
Efficient Aggregate Anonymous Credentials for Decentralized Identity
Rebekah Mercer, Kaoutar El Khiyaoui, Angelo De Caro, and Elli Androulaki
Anonymous credential schemes allow users to prove claims about themselves in a privacy-preserving manner. Put simply, a user can prove that she has an attribute value (e.g., she is a citizen) without revealing any other information about herself. Traditional schemes (including those with multiple credential issuers) operate under the assumption that each recognized issuer produces credentials for all user attributes. This assumption, however, is not practical: in the real world, no such issuer exists; an identity provider today issues a user credentials for a subset of user attributes, not all of them. A promising approach to support this setting is aggregate anonymous credentials, which allow a user to receive credentials from several issuers such that she can aggregate them and prove various claims about her attribute values in one shot. In this paper, we first introduce what we call aggregate tag-based signatures and describe an efficient instantiation. We then leverage the latter together with structure-preserving signatures and signatures of knowledge to construct an efficient aggregate anonymous credential scheme. Finally, we formally evaluate the security of the proposed schemes and run benchmarks to showcase the practicality of the resulting scheme and its relevance for decentralized identity applications.
Last updated:  2025-11-25
SoK: Secure Computation over Secret Shares
Tamir Tassa and Arthur Zamarin
Secure multiparty computation (MPC) enables mutually distrustful parties to jointly compute functions over private data without revealing their inputs. A central paradigm in MPC is the secret-sharing-based model, where secret sharing underpins the efficient realization of arithmetic, comparison, numerical, and Boolean operations on shares of private inputs. In this paper, we systematize protocols for these operations, with particular attention to two foundational contributions \cite{ChidaGHIKLN18,NO07} that devised secure multiplication and comparison. Our survey provides a unified, self-contained exposition that highlights the composability, performance trade-offs, and implementation choices of these protocols. We further demonstrate how they support practical privacy-preserving systems, including recommender systems, distributed optimization platforms, and e-voting infrastructures. By clarifying the protocol landscape and connecting it to deployed and emerging applications, we identify concrete avenues for improving efficiency, scalability, and integration into real-world MPC frameworks. Our goal is to bridge theory and practice, equipping both researchers and practitioners with a deeper understanding of secret-sharing-based MPC as a foundation for privacy technologies.
Last updated:  2025-11-25
Hardness of Problems with Hints in Code-Based Cryptography and Applications to Traitor Tracing
Thomas Debris-Alazard, Victor Dyseryn, and Duong Hieu Phan
Code-based cryptography has reached maturity for standard primitives such as encryption and digital signatures. However, when it comes to advanced encryption functionalities, particularly in multi-user settings involving collusions among users holding different secret keys, there is still no foundational framework for code-based schemes. In this work, we address this gap by introducing a multi-receiver encryption scheme with tracing capability based on coding assumptions. This primitive often serves as a foundation for more advanced constructions such as attribute-based or functional encryption. To achieve this goal, we propose a kernel sampling technique that enables the sampling of secret keys under a common public key, thereby realizing multi-receiver encryption. The resulting construction is as efficient as the underlying public-key encryption scheme, namely Alekhnovich's scheme from FOCS'03. We also introduce new hardness assumptions on problems with hints. These assumptions extend classical code-based problems to handle collusion among dishonest users in the multi-user setting. In particular, we define the $\ell$-Decoding Problem ($\ell$-DP), the code-based analogue of the $k$-LWE problem in lattices, and provide a reduction from the standard Decoding Problem (DP) to $\ell$-DP. We further introduce structural variants relying on Moderate Density Parity Check (MDPC) codes that we call $\ell$-MDPC-DP. Based on $\ell$-MDPC-DP, we design the first code-based traitor-tracing encryption scheme. Interestingly, the security of our scheme relies on $\ell$-MDPC-DP and a slight variation called $(\ell,\ell')$-OM-MDPC-DP, the latter of which we prove to be as hard as $\ell$-DP via a polynomial reduction, therefore reducing the security of our scheme to only $\ell$-MDPC-DP. Although no formal reduction from $\ell$-DP to $\ell$-MDPC-DP is given, the definition of $\ell$-MDPC-DP is a natural variant of $\ell$-DP adapted to the multi-user code-based setting. 
Furthermore, we also provide evidence of $\ell$-MDPC-DP hardness by presenting detailed cryptanalytic arguments.
Last updated:  2025-11-25
Multivariate Signatures with Polynomial Factorization
Irene Di Muzio, Martin Feussner, and Igor Semaev
We propose a new multivariate digital signature scheme whose central mapping arises from the product of two one-variate polynomials over a finite field $\mathbb{F}_q$. The resulting quadratic transformation is efficiently invertible through polynomial factorization, defining the trapdoor mechanism. The public key comprises $m$ bilinear forms in $2n$ variables, obtained by masking the central map with secret linear transformations. A reference implementation targeting NIST security level 1 achieves a 24-byte signature and a 12-kilobyte public key. This signature size is among the smallest ever proposed for level 1 security and the scheme achieves verification efficiency comparable to the fastest existing designs. Security relies on the hardness of solving certain bilinear systems, for which it seems no efficient classical or quantum algorithms are known.
Last updated:  2025-11-25
Low-Latency Fully Homomorphic Arithmetic Using Parallel Prefix Group Circuit with Primitive Gate Bootstrapping
Dohyuk Kim, Sin Kim, Seunghwan Lee, and Dong-Joon Shin
Fully Homomorphic Encryption over the Torus (TFHE) is a fully homomorphic encryption scheme that efficiently supports Boolean logic gates by performing gate operations and refreshing ciphertext noise with a single gate bootstrapping. However, its operation is limited to simple two-input gates such as AND, OR, XOR and NAND, requiring deep circuits and multiple bootstrapping steps to support more complex arithmetic. In this paper, we propose Primitive Gate Bootstrapping, a new algebraic framework that significantly expands the class of Boolean functions evaluable with a single bootstrapping. By formalizing bootstrappable functions as compositions of linear maps and negacyclic functions, called the Blind-Rotational Function Family (BRFF), we define a subclass, the Primitive Gate Family (PGF). This family includes multi-input hybrid gates such as l-input XOR, 3-input Majority, and AND-XOR, which can all be realized with a single bootstrapping. Building on PGF, we further design a general circuit framework called the Parallel Prefix Group Circuit (PPGC) for efficiently implementing arithmetic and logical operations. PPGCs enable n-bit addition, subtraction, comparison, equality, select, minimum/maximum, absolute value, and negation with logarithmic depth O(log n) and significantly reduced latency compared to TFHE. In particular, our optimized implementations of addition and subtraction achieve a 1.92× speedup over TFHE, while the size of the blind rotation key is reduced by approximately 40%. In addition to two-input addition, we also introduce an efficient technique for multi-input addition, which is particularly useful in applications such as encrypted matrix multiplication. Therefore, the PGF-based constructions offer a practically scalable and efficient foundation for depth-sensitive homomorphic computations.
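The log-depth prefix structure behind such circuits can be illustrated on plaintext bits with a Kogge-Stone-style adder. This is a clear-text sketch only, with an illustrative bit width n; the paper evaluates levels like these homomorphically via primitive gate bootstrapping:

```python
# Plaintext sketch of a log-depth parallel-prefix (Kogge-Stone style) adder.
# Carries are computed from generate/propagate signals combined with the
# associative operator (g, p) o (g', p') = (g | (p & g'), p & p') over
# O(log n) levels. Bit width n is illustrative.
def prefix_add(a, b, n=8):
    abits = [(a >> i) & 1 for i in range(n)]
    bbits = [(b >> i) & 1 for i in range(n)]
    g = [x & y for x, y in zip(abits, bbits)]        # generate signals
    p = [x ^ y for x, y in zip(abits, bbits)]        # propagate signals
    G, P = g[:], p[:]
    d = 1
    while d < n:                                     # O(log n) prefix levels
        G = [G[i] | (P[i] & G[i - d]) if i >= d else G[i] for i in range(n)]
        P = [P[i] & P[i - d] if i >= d else P[i] for i in range(n)]
        d *= 2
    carry = [0] + G[:n - 1]                          # carry into bit i
    s = 0
    for i in range(n):
        s |= (p[i] ^ carry[i]) << i                  # sum bit i
    return s                                         # (a + b) mod 2^n

assert prefix_add(100, 57) == 157
```

A ripple-carry adder needs n sequential carry steps, i.e., depth O(n); the prefix network above reaches every carry in O(log n) combining levels, which is what makes the homomorphic evaluation depth-efficient.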
Last updated:  2025-11-25
HRA-Secure Puncturable Attribute-Based Proxy Re-Encryption from Lattices for Secure Cloud Sharing
Tianqiao Zhang, Mingming Jiang, Fucai Luo, Yuyan Guo, and Jinqiu Hou
With the rapid advancement of cloud computing technology, outsourcing massive datasets to cloud servers has become a prominent trend, making secure and efficient data sharing mechanisms a critical requirement. Attribute-based proxy re-encryption (ABPRE) has emerged as an ideal solution due to its support for fine-grained, one-to-many access control and robust ciphertext transformation capabilities. However, existing ABPRE schemes still exhibit shortcomings in addressing forward security issues caused by long-term private key leakage, threats from quantum computer attacks, and vulnerabilities to honest re-encryption attacks (HRA). To simultaneously resolve these challenges, this paper introduces a novel cryptographic primitive termed puncturable attribute-based proxy re-encryption with switchable tags (PABPRE-ST), constructing a secure cloud data sharing scheme that supports fine-grained revocation. By integrating puncturable encryption (PE) mechanisms into the ABPRE framework, the scheme achieves fine-grained ciphertext revocation based on tags. In PABPRE-ST, data owners embed tags into ciphertexts, enabling data users to puncture specific tags and thereby revoke access to corresponding ciphertexts at a granular level. Furthermore, the scheme allows delegators to switch ciphertext tags, enhancing sharing flexibility. We formalize the security definitions for the proposed puncturable attribute-based proxy re-encryption scheme and prove its security under the learning with errors (LWE) assumption, which is widely believed to be resistant to quantum computer attacks. Security analysis demonstrates that the proposed scheme achieves HRA security in the standard model.
Last updated:  2025-11-25
On the BUFF Security of ECDSA with Key Recovery
Keita Emura
In the usual syntax of digital signatures, the verification algorithm takes a verification key in addition to a signature and a message, whereas in ECDSA with key recovery, which is used in Ethereum, no verification key is input to the verification algorithm. Instead, a verification key is recovered from a signature and a message. In this paper, we explore the BUFF security of ECDSA with key recovery (KR-ECDSA), where BUFF stands for Beyond UnForgeability Features (Cremers et al., IEEE S&P 2021). As a result, we show that KR-ECDSA provides BUFF security, except for weak non-resignability (wNR). It is particularly noteworthy that the KR-ECDSA verification algorithm takes an Ethereum address addr as input. This address is defined as the rightmost 160 bits of the Keccak-256 hash of the corresponding ECDSA verification key. Crucially, the algorithm verifies that the hash of the recovered verification key matches addr. Our security analysis shows that this procedure, checking whether the hash value of the recovered verification key equals the address, is mandatory for providing BUFF security. We also discuss whether wNR is mandatory in Ethereum or not. To clarify which part is mandatory for providing BUFF security in KR-ECDSA, we show that the original ECDSA does not provide any BUFF security. As a by-product of the analysis, we show that one of our BUFF attacks also works against Aumayr et al.'s ECDSA-based adaptor signature scheme (ASIACRYPT 2021) and Qin et al.'s blind adaptor signature scheme (IEEE S&P 2023), which is based on Aumayr et al.'s scheme. We emphasize that the attack is positioned outside of their security models.
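The check the analysis identifies as mandatory is structurally simple, as the toy sketch below shows. Note that Ethereum uses Keccak-256, which pads differently from the standardized SHA3-256 available in the Python standard library, so SHA3-256 is only a stand-in here; the key bytes are also illustrative:

```python
import hashlib

# Structural sketch of the address check that the paper shows is mandatory
# for BUFF security: verification must confirm that the recovered key hashes
# to the claimed address (rightmost 160 bits = last 20 bytes of the digest).
# NOTE: Ethereum uses Keccak-256; stdlib sha3_256 pads differently and is
# used here only as a stand-in. The key bytes are illustrative.
def address_of(pubkey_bytes):
    return hashlib.sha3_256(pubkey_bytes).digest()[-20:]

def check_recovered_key(recovered_pubkey, addr):
    # Without this comparison, a recovered key is accepted unchecked,
    # which is the gap the paper's BUFF attacks on plain ECDSA exploit.
    return address_of(recovered_pubkey) == addr

pk = b"\x04" + b"\x11" * 64            # illustrative 65-byte uncompressed key
addr = address_of(pk)
assert check_recovered_key(pk, addr)
assert not check_recovered_key(b"\x04" + b"\x22" * 64, addr)
```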
Last updated:  2025-11-24
VeRSA: Verifiable Registries with Efficient Client Audits from RSA Authenticated Dictionaries
Nirvan Tyagi, Ben Fisch, Andrew Zitek-Estrada, Joseph Bonneau, and Stefano Tessaro
Verifiable registries allow clients to securely access a key-value mapping maintained by an untrusted server. Registries must be audited to ensure global invariants are preserved, which, in turn, allows for efficient monitoring of individual registry entries by their owners. To this end, existing proposals either assume trusted third-party auditors or rely on incrementally verifiable computation (IVC) via expensive recursive SNARKs to make registries client-auditable. In this work, we give new client-auditable verifiable registries with throughputs up to $100\times$ greater than baseline IVC solutions. Our approach relies on an authenticated dictionary based on RSA accumulators for which we develop a new constant-size invariant proof. We use this as a replacement for Merkle trees to optimize the baseline IVC approach, but also provide a novel construction which dispenses with SNARKs entirely. This latter solution adopts a new checkpointing method to ensure client view consistency.
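The membership mechanics of the RSA accumulators underlying the authenticated dictionary can be sketched with toy parameters. The modulus below is insecurely small and the prime representatives are illustrative; the paper's constant-size invariant proof is not reproduced here:

```python
from math import prod

# Toy RSA accumulator (insecure parameters, for illustration only).
# Elements are represented by primes; the accumulator is g^(product of
# member primes) mod N, and a membership witness for x is the accumulator
# computed without x. Verification checks witness^x == acc (mod N).
N = 1009 * 1013                   # toy RSA modulus; real N is ~2048 bits
g = 3                             # toy base
members = [5, 7, 11]              # illustrative prime representatives

def accumulate(primes):
    return pow(g, prod(primes), N)

def witness(primes, x):
    return pow(g, prod(p for p in primes if p != x), N)

acc = accumulate(members)
w = witness(members, 7)
assert pow(w, 7, N) == acc        # membership check for element 7
```

Because the accumulator and each witness are single group elements, proofs stay constant-size regardless of how many entries the dictionary holds, which is the property the registry construction builds on.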
Last updated:  2025-11-24
An efficient quantum algorithm for computing $S$-units and its applications
Jean-François Biasse and Fang Song
In this paper, we provide details on the proofs of the quantum polynomial time algorithm of Biasse and Song (SODA 16) for computing the $S$-unit group of a number field. This algorithm directly implies polynomial time methods to calculate class groups, $S$-class groups, relative class group and unit group, ray class groups, solve the principal ideal problem, solve certain norm equations, and decompose ideal classes in the ideal class group. Additionally, combined with a result of Cramer, Ducas, Peikert and Regev (Eurocrypt 2016), the resolution of the principal ideal problem allows one to find short generators of a principal ideal. Likewise, methods due to Cramer, Ducas and Wesolowski (Eurocrypt 2017) use the resolution of the principal ideal problem and the decomposition of ideal classes to find so-called ``mildly short vectors'' in ideal lattices of cyclotomic fields.
Last updated:  2025-11-24
Persistent BitTorrent Trackers
François-Xavier Wicht, Zhengwei Tong, Shunfan Zhou, Hang Yin, and Aviv Yaish
Private BitTorrent trackers enforce upload-to-download ratios to prevent free-riding, but suffer from three critical weaknesses: reputation cannot move between trackers, centralized servers create single points of failure, and upload statistics are self-reported and unverifiable. When a tracker shuts down (whether by operator choice, technical failure, or legal action), users lose their contribution history and cannot prove their standing to new communities. We address these problems by storing reputation in smart contracts and replacing self-reports with cryptographic attestations. Receiving peers sign receipts for transferred pieces, which the tracker aggregates and verifies before updating on-chain reputation. Trackers run in Trusted Execution Environments (TEEs) to guarantee correct aggregation and prevent manipulation of state. If a tracker is unavailable, peers use an authenticated Distributed Hash Table (DHT) for discovery: the on-chain reputation acts as a Public Key Infrastructure (PKI), so peers can verify each other and maintain access control without the tracker. This design persists reputation across tracker failures and makes it portable to new instances through single-hop migration in factory-deployed contracts. We formalize the security requirements, prove correctness under standard cryptographic assumptions, and evaluate a prototype on Intel TDX. Measurements show that transfer receipts add less than 6\% overhead with typical piece sizes, and signature aggregation speeds up verification by $2.5\times$.
Last updated:  2025-11-24
Leveled Homomorphic Encryption Schemes for Homomorphic Encryption Standard
Shuhong Gao and Kyle Yates
Homomorphic encryption allows for computations on encrypted data without exposing the underlying plaintext, enabling secure and private data processing in various applications such as cloud computing and machine learning. This paper presents a comprehensive mathematical foundation for three prominent homomorphic encryption schemes: Brakerski-Gentry-Vaikuntanathan (BGV), Brakerski-Fan-Vercauteren (BFV), and Cheon-Kim-Kim-Song (CKKS), all based on the Ring Learning with Errors (RLWE) problem. We align our discussion with the functionalities proposed in the recent homomorphic encryption standard, providing detailed algorithms and correctness proofs for each scheme. Additionally, we propose improvements to the current schemes focusing on noise management and optimization of public key encryption and leveled homomorphic computation. Our modifications ensure that the noise bound remains within a fixed function for all levels of computation, guaranteeing correct decryption and maintaining efficiency comparable to existing methods. The proposed enhancements reduce ciphertext expansion and storage requirements, making these schemes more practical for real-world applications.
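The noise-management concern common to these RLWE-based schemes can be illustrated with a toy (symmetric, non-ring) LWE encryption of a single bit, where decryption is correct exactly while the accumulated noise stays below q/4; these parameters offer no security and the sketch is not one of BGV, BFV, or CKKS:

```python
import random

# Toy symmetric LWE encryption of one bit (illustration of the noise
# budget only; these parameters are far too small for any security).
q, n = 257, 16
s = [random.randrange(q) for _ in range(n)]   # secret key

def encrypt(bit):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-4, 5)               # small fresh noise
    b = (sum(ai * si for ai, si in zip(a, s)) + e + bit * (q // 2)) % q
    return a, b

def decrypt(a, b):
    m = (b - sum(ai * si for ai, si in zip(a, s))) % q
    # Decoding succeeds exactly while |noise| < q/4, which is why
    # noise bounds must be tracked across homomorphic operations.
    return 1 if q // 4 < m < 3 * q // 4 else 0
```

Each homomorphic operation grows the noise term e, so keeping the bound within a fixed function of the level, as the paper proposes, is what guarantees correct decryption throughout a leveled computation.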
Last updated:  2025-11-24
Efficient Constant-Size Linkable Ring Signatures for Ad-Hoc Rings via Pairing-Based Set Membership Arguments
Min Xie, Zhengzhou Tu, Man Ho Au, Junbin Fang, Xuan Wang, and Zoe Lin Jiang
Linkable Ring Signatures (LRS) allow users to anonymously sign messages on behalf of ad-hoc rings, while ensuring that multiple signatures from the same user can be linked. This feature makes LRS widely used in privacy-preserving applications like e-voting and e-cash. To scale to systems with large user groups, efficient schemes with short signatures and fast verification are essential. Recent works, such as DualDory (ESORICS’22) and LLRing (ESORICS’24), improve verification efficiency through offline precomputations but rely on static rings, limiting their applicability in ad-hoc ring scenarios. Similarly, constant-size ring signature schemes based on accumulators face the same limitation. In this paper, we propose a framework for constructing constant-size LRS suitable for large ad-hoc rings. We introduce a novel pairing-based Set Membership Argument (SMA) with a proof size of only three group elements. By leveraging KZG polynomial commitments, we optimize verification to require only a constant number of group exponentiations and pairings, plus a linear number of field multiplications. Utilizing the SMA, our framework achieves constant-size signatures with verification dominated by linear field operations, outperforming existing schemes that require linear group exponentiations in ad-hoc ring settings. Moreover, it exhibits strong scalability: (i) compatibility with any PKI-based cryptosystem and (ii) scoped linkability, enabling flexible definitions of linking scope. We instantiate our framework using a discrete logarithm public key structure. On the BN254 curve, our signature size is fixed at 687 bytes, which, to the best of our knowledge, is the shortest LRS for ring sizes larger than 32.
For a ring size of 1024, our verification cost is only 10.4 ms, achieving 48.6×, 2.6×–467×, 7.9×–13.2×, and 2.2×–102.5× improvements over Omniring (CCS’19), DualDory (with and without precomputation), LLRing-DL (with and without precomputation), and LLRing-P (with and without precomputation), respectively. Moreover, this performance gap continues to grow as the ring size increases.
Last updated:  2025-11-24
Weak Tweak-Key Analysis Of Blink Via Superbox
Shiyao Chen, Jian Guo, and Tianyu Zhang
This work presents the first third-party cryptanalysis of Blink, a recent tweakable block cipher built on the Three-Hash Framework (THF) with an $\alpha$-reflection structure and a long-key design. We build our analysis on the idea of the Superbox, in view of its natural support for local clustering and easy integration with automatic search tools. We first develop a deliberately lightweight theoretical model that captures the value correlations inside the Superbox of Blink when the value spaces are affine, as well as the weak-key conditions, fixed-key probabilities, and local clustering behaviors. Our theory is only intended as a concrete, easy-to-follow specialization of existing generic frameworks, adapted precisely to the structure of Blink. We then build a simple pattern-based automatic search model for weak tweak-key differentials. As a result, for Blink-64 with a 448-bit key, we obtain a 10-round distinguisher for up to $2^{442}$ weak keys with probability $2^{-50.42}$; for Blink-128 with a 1024-bit key, we obtain a 10-round distinguisher for up to $2^{1010}$ weak keys with probability $2^{-68.83}$, which can be extended to a 12-round multiple differential distinguisher with probability $2^{-96.83}$. The all-zero tweak is a convenient choice to maximize the weak-key space. Based on the weak tweak-key distinguisher, we further mount a 13-round key-recovery attack on Blink-64, recovering the full 448-bit master key with time complexity $2^{112.15}$ and $2^{56}$ chosen plaintexts. All these distinguishers and key-recovery attacks work within the designers' claimed data and time bounds. Our analysis remains consistent with the security of the full-round design of Blink, while offering additional insight into the edge-case behaviors arising from the tweak-key interaction, which could inform future refinements of tweakable block cipher constructions.
Last updated:  2025-11-24
Introducing the ALF family: AES-NI-based length- and format-preserving encryption
Dachao Wang, Alexander Maximov, and Thomas Johansson
This paper introduces the ALF cipher family, designed for format-preserving encryption. The key strategy is to leverage AES-NI instructions to achieve high software performance while also providing 128-bit security. Since input sizes vary widely across use cases, we present a family of ciphers whose instances cover different domain sizes. While the included designs differ, their common theme is the use of AES-NI instructions. A central part of the paper is the design of an AES-related bit-oriented block cipher with a block size varying in the interval 16-127 bits. The design is byte-oriented, but allows appending 0-7 bits to a byte-oriented input. We introduce an approach for efficient implementation using AES-NI, even though the block size is not 128 bits. The block cipher family is turned into a format-preserving encryption scheme by a new cycle-sliding transformation, improving slightly on traditional cycle-walking. Performance evaluation shows speed improvements of several orders of magnitude compared to the standardised algorithm FF1 and the earlier FF3.
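Traditional cycle-walking, which the cycle-sliding transformation improves on, can be sketched as follows; toy_perm is a stand-in for a block cipher and is not one of the ALF designs:

```python
# Cycle-walking: build a permutation of a small format domain D from a
# permutation of a larger power-of-two domain by iterating until the
# output lands back in D. toy_perm stands in for a real block cipher.
M = 1 << 16        # "block cipher" domain: 16-bit blocks
D = 10 ** 4        # format domain: 4 decimal digits

def toy_perm(x: int) -> int:
    # An affine toy permutation of Z_M (odd multiplier => invertible).
    return (31153 * x + 12345) % M

def fpe_encrypt(x: int) -> int:
    assert 0 <= x < D
    y = toy_perm(x)
    while y >= D:              # walk until we land back in the format
        y = toy_perm(y)
    return y
```

Because toy_perm is a permutation, the walk always terminates and fpe_encrypt maps each format point to the next format point along its cycle, yielding a permutation of the format domain; the variable walk length is the overhead that cycle-sliding aims to reduce.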
Last updated:  2025-11-24
Updatable Private Set Intersection and Beyond: Efficient Constructions via Circuit Private Set Intersection
Ferran Alborch, Tom Chauvier, Antonio Faonio, Alexandre Fontaine, Ferhat Karakoç, Alptekin Küpçü, Camille Malek, and Melek Önen
Private Set Intersection (PSI) has been widely studied, deployed, and demonstrated through various real-life use cases such as mobile private contact discovery, privacy-preserving contact tracing, etc. Nevertheless, the majority of existing solutions assume that the underlying datasets are static and require a fresh execution of PSI each time the datasets are updated. In this work, similar to a recent solution by Badrinarayanan et al. (ASIACRYPT 2024), we investigate the problem of designing efficient and secure updatable PSIs in the honest-but-curious model by executing a small number of PSIs over smaller sets instead of one PSI over the entire, updated sets. We first identify that existing constructions suffer from two privacy leakages and propose to mitigate them using circuit PSIs, variants of PSI protocols that, instead of outputting the resulting intersection, output secret shares of the intersection and nothing more, combined with secure shuffling when needed. We construct a generic framework for PSI over updated sets which can use any circuit-PSI. Additionally, we show that this framework can easily be extended to a protocol that outputs the cardinality of the intersection instead of the intersection itself. Finally, we provide an in-depth discussion on the feasibility of circuit PSI over updated sets, the main challenges to overcome, and solutions for some particular cases. Our solutions are implemented in Rust and their performance is compared with the state of the art, achieving an improvement of $11\times$ to $40\times$ in updatable PSI and $14\times$ to $107\times$ in updatable cardinality PSI in computation time. The proposed framework is also demonstrated through a real-life use case, namely, a spam detection filter.
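The underlying idea of running a small number of PSIs over smaller sets can be illustrated on plain (non-private) Python sets; the actual framework replaces each small intersection below with a circuit-PSI and adds shuffling, and the concrete values here are hypothetical:

```python
# Updating an intersection from deltas instead of recomputing it from
# scratch (plain sets, no privacy: the framework replaces each small
# intersection with a circuit-PSI and shuffles when needed).
old_A, old_B = {1, 2, 3}, {2, 3, 4}
cached = old_A & old_B            # result of the previous PSI run

add_A, add_B = {5, 6}, {6, 7}     # insertions since that run
# Three small cross-intersections suffice instead of one large PSI:
new = cached | (add_A & old_B) | (old_A & add_B) | (add_A & add_B)

assert new == (old_A | add_A) & (old_B | add_B)
```

The set identity behind the assert is what lets the update cost scale with the delta sizes rather than the full set sizes; handling deletions and hiding the intermediate cross-intersections are where the circuit-PSI machinery comes in.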
Last updated:  2025-11-24
Zero-Knowledge Protocols with PVC Security: Striking the Balance between Security and Efficiency
Yi Liu, Yipeng Song, Anjia Yang, and Junzuo Lai
Zero-knowledge protocols allow a prover to prove possession of a witness for an NP statement without revealing any information about the witness itself. Such protocols have found extensive applications in various fields, including secure computation and blockchain. However, in certain scenarios (e.g., when the statements are complicated), existing zero-knowledge protocols may not be well-suited due to their limited applicability or high computational overhead. We address these limitations by incorporating the notion of publicly verifiable covert (PVC) security into zero-knowledge protocols. PVC security, recently emerging from secure computation, effectively balances security and efficiency in practical scenarios. With PVC security, while a malicious party may attempt to cheat, such cheating will be detected and become publicly verifiable with a significant probability (called the deterrence factor, e.g., ${>}90\%$). This notion is well-suited for practical scenarios involving reputation-conscious parties (e.g., companies) and offers substantial efficiency improvements. In this paper, we present the first definition of zero-knowledge protocols with PVC security. We then propose a generic transformation that converts Sigma protocols with $1$-bit challenges, a class of protocols widely used for zero-knowledge, into efficient zero-knowledge protocols with PVC security. By applying our transformation, we can substantially improve the efficiency of existing protocols for reputation-sensitive parties. For instance, applying the transformation to achieve a deterrence factor of $93.75\%$ incurs an overhead of only around $20\%$ over the original protocol. Therefore, our results contribute to significant advancements in practical zero-knowledge protocols.
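The quoted 93.75% deterrence factor matches the pattern 1 - 2^{-t} with t = 4, which is how deterrence commonly arises from repeated 1-bit-challenge checks; the paper's exact mechanism may differ, so this is only an arithmetic illustration:

```python
# Deterrence-factor arithmetic: if a cheating attempt survives each of
# t independent 1-bit-challenge checks with probability 1/2, it goes
# undetected only with probability 2**-t. (A common covert-security
# pattern; the paper's exact transformation may differ.)
def deterrence(t: int) -> float:
    return 1 - 2 ** -t

assert deterrence(4) == 0.9375    # the 93.75% figure quoted above
```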
Last updated:  2025-11-24
Derivative-Free Richelot Isogenies via Subresultants: Algebraic Equivalence and Certified Guarded Computation
Hung T. Dang
We present a derivative-free Richelot (2,2)-isogeny formulation using first subresultants and a canonical quadratic lift. In odd characteristic, we prove its algebraic equivalence in $\mathbb{F}_p[x]$ to the classical Wronskian formulation under natural normalization. Leveraging this, we introduce the Guarded Subresultant Route (GSR): a deterministic evaluator with constant-size algebraic guards, a lightweight post-check, and at most one affine retry. It returns a certified triple (U, V, W) or rejects non-admissible inputs, eliminating differentiation while enforcing admissibility and auditable control flow. Prototypes show the core is 1.46–1.70 times faster than the classical Wronskian across varied primes, with GSR adding about 5–10 microseconds of constant overhead. The backend-agnostic design suits batched and hierarchical genus-2 isogeny pipelines for reproducible computation.
Last updated:  2025-11-24
On Equivalence of the Butterfly Structure
Chin Hei Chan
Butterfly structures were introduced by Perrin et al. as a generalization of Dillon's APN permutation over 6 bits. It was proved independently by several authors that such butterflies are differentially 4-uniform and have the best known nonlinearity among functions over $\mathbb{F}_{2^{2k}}$. In [LLHQ21, LHXZ22], the authors provided sufficient conditions on the coefficients under which the closed butterfly is a permutation, and further proved that such functions have boomerang uniformity 4 and are linearly equivalent to Gold functions. In this paper, we adopt techniques from [DZ23] and [GK] to classify closed butterfly structures satisfying more general conditions in terms of EA-equivalence and CCZ-equivalence.
Last updated:  2025-11-23
New Post-Quantum IBE leveraging maturity, efficiency and security of standard schemes
Julien CAM
Many Identity-Based Encryption (IBE) schemes rely on the hardness of the Discrete Logarithm Problem, making them vulnerable to quantum attacks like Shor's algorithm. In recent years, lattice-based cryptography has emerged as a source of post-quantum cryptosystems, for example with Kyber, Dilithium and Falcon chosen by NIST to be standardized as ML-KEM, ML-DSA and FN-DSA. In the meantime, some IBEs have also been proposed over lattices. However, these have largely remained theoretical constructions, as the community's attention has focused more on the NIST competition than on optimizing IBEs, assessing their security, and protecting them against physical attacks. In this paper, we therefore build a new IBE scheme from the highly studied ML-KEM, ML-DSA and ModFalcon. More precisely, we leverage the Module-NTRU trapdoor from ModFalcon to enable extraction of secret keys, we use the encryption and decryption algorithms from ML-KEM, and the modular arithmetic and Number-Theoretic Transform from ML-DSA. Being able to reuse some of their code, our scheme is easy to implement and can benefit from existing and future side-channel protections. We also prove the IND-sID-CPA security of our scheme under the Ring-LWE and Module-NTRU assumptions, and precisely describe the choice of appropriate parameters. As a contribution of independent interest, we provide an efficient estimator for the decryption failure probability of an LWE-based scheme, which allows us to concretely verify the negligible failure probability of our scheme at a standard security level.
Last updated:  2025-11-23
Revisiting the Generalized Birthday Problem and Equihash: Single or K Lists?
Lili Tang, Yao Sun, and Xiaorui Gong
The Generalized Birthday Problem ($\textsf{GBP}$), which seeks $k$ hash values from $k$ lists whose XOR is zero, is a fundamental problem across multiple cryptographic domains. While the $k$-list $\textsf{GBP}$ has been extensively studied, many schemes including $\textsf{Equihash}$ (NDSS'16) utilize a single-list variant (selecting hash values from a single list) without clear theoretical grounding. Our work first reveals that the $k$-list $\textsf{GBP}$ implicitly exhibits a regularity property, a block-wise structure that has been thoroughly studied by Esser and Santini (Crypto'24) in the context of the Syndrome Decoding Problem. Such structured regularity can often be leveraged to design efficient protocols and enable new functionalities. In this work, we revisit these two long-conflated $\textsf{GBP}$s and initiate a systematic study of regularity in the $\textsf{GBP}$ realm. $\textbf{Complexity.}$ In the worst-case setting, we develop a novel ISD-based framework for $\textsf{GBP}$. When $k/n > 0.188$ (regular case) and $k/n > 0.11$ (non-regular case), the proposed algorithms beat the worst-case complexity of $2^{n/2}$. Our results disprove the average-case to worst-case $k$-$\textsf{XOR}$ conjecture when $k$ is non-constant. In the average-case regime, we fill in theoretical gaps in solving the single-list $\textsf{GBP}$ and show that the regular variant exhibits a $\sqrt{2}$-factor difference in the exponent, which extends naturally to the $k$-$\textsf{SUM}$ problem and offers new insights into its complexity. $\textbf{Implications for Cryptography.}$ We analyze the impact of regularity on incremental hashing and propose a new collision attack against the ID-based incremental hash (Eurocrypt'97). Our attack achieves an asymptotic time complexity of $\mathcal{O}(\sqrt{n} \cdot 2^{\sqrt{2n}})$, significantly improving upon Wagner's previous bound of $\mathcal{O}(2^{\sqrt{4n}})$ (Crypto'02). 
Applying our attack to $\textsf{iSHAKE256}$, we reduce its security lower bound from $2^{256}$ to $2^{189}$. In the realm of $\textsf{Equihash}$, the index-pointer technique has significantly weakened its ASIC resistance. To address this, we propose $\textsf{Requihash}$, a PoW with enhanced ASIC resistance and smaller solution size, rigorously aligned with the regular $k$-list $\textsf{GBP}$.
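A minimal sketch of the classical $k$-list approach these results build on, Wagner-style merging of four lists on low-order bits (illustration only; none of the paper's ISD-based or regular-variant techniques appear here):

```python
import random
from collections import defaultdict

# Wagner-style 4-list solve: find x1..x4, one per list, with
# x1 ^ x2 ^ x3 ^ x4 == 0, by first merging pairs on the low bits.
random.seed(2024)
n, half = 16, 8
lists = [[random.getrandbits(n) for _ in range(1 << half)]
         for _ in range(4)]

def merge(L1, L2, mask):
    # Keep every pair whose XOR vanishes on the masked (low) bits.
    buckets = defaultdict(list)
    for b in L2:
        buckets[b & mask].append(b)
    return [(a ^ b, (a, b)) for a in L1 for b in buckets[a & mask]]

low = (1 << half) - 1
L12 = merge(lists[0], lists[1], low)   # low bits of x1 ^ x2 are zero
L34 = merge(lists[2], lists[3], low)

# Final join: the surviving values agree on the remaining high bits.
table = defaultdict(list)
for v, pair in L34:
    table[v].append(pair)
solutions = [p1 + p2 for v, p1 in L12 for p2 in table[v]]
```

Each merge step keeps list sizes near $2^{half}$, which is the memory pressure that the companion paper's List Item Reduction techniques target.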
Last updated:  2025-11-23
Differential cryptanalysis of "An optimized novel lightweight block cipher for image encryption"
Khaled Hosseini and Sadegh Sadeghi
This paper presents a security analysis of the ARX-based lightweight block cipher proposed by Mohanapriya and Nithish (Sci Rep 15, 36060 (2025)) for image encryption in IoT environments. The cipher employs a 64-bit key and a 64-bit block size, relying on Addition, Rotation, and XOR (ARX) operations. Our analysis demonstrates that the full-round version of this cipher is vulnerable to differential cryptanalysis. In fact, the cipher can be distinguished from a random permutation using $2^{41}$ chosen plaintexts. Consequently, the designers' security claims are not fully supported.
Last updated:  2025-11-23
New Memory Optimizations of Wagner's Algorithms via List Item Reduction
Lili Tang, Rui Ding, Yao Sun, and Xiaorui Gong
The Generalized Birthday Problem ($\textsf{GBP}$) serves as a cornerstone for a broad spectrum of cryptanalytic research. The classical solution, Wagner’s $k$-tree algorithm (CRYPTO’02), is characterized by inherently high memory complexity. Subsequently, Biryukov and Khovratovich (NDSS’16) introduced $\textsf{Equihash}$, a memory-hard proof-of-work scheme constructed upon a single-list variant of Wagner’s algorithm. Due to its robust resistance to ASIC mining, $\textsf{Equihash}$ has emerged as one of the most widely adopted proof-of-work schemes in blockchain. However, memory optimization for Wagner-style algorithms remains a persistent challenge in cryptographic research. Conventional approaches primarily focus on reducing list size to lower memory consumption, yet this strategy typically incurs a prohibitive time penalty. For instance, halving the peak memory usage of the $\textsf{Equihash}(200, 9)$ mining algorithm increases its theoretical time complexity by a factor of $2^{24.6}$. In this work, we investigate a novel optimization vector: List Item Reduction (LIR), which facilitates practical and fine-grained memory-time trade-offs. We systematize existing LIR techniques and propose novel optimization methods integrated into a new hybrid framework. While our approach does not improve asymptotic memory complexity, it achieves a near-linear trade-off in practice, offering substantial benefits for the concrete design of efficient Wagner-style algorithms. Specifically, our techniques reduce peak memory usage by nearly 50% (from $2nN$ to $nN$ bits) across all $\textsf{Equihash}$ parameters, with only an approximate twofold time penalty. Furthermore, we present concrete implementations, including the first realization of an in-place $\textsf{merge}$. Building on these results, we propose an ASIC-friendly framework leveraging an external-memory caching mechanism.