Papers updated in last 7 days (74 results)

Last updated:  2025-12-23
PIsignHD: A New Structure for the SQIsign Family with Flexible Applicability
Kaizhan Lin, Weize Wang, Chang-An Zhao, and Yunlei Zhao
In this paper, we propose a new structure for the SQIsign family: Pentagon Isogeny-based Signature in High Dimension (referred to as PIsignHD). The new structure separates the hash of the commitment and that of the message by employing two cryptographic hash functions. This feature is desirable in practice, particularly for applications based on low-power mobile devices or for those deployed interactively over the Internet or in the cloud computing setting. This structure can be generally applied to all SQIsign variants. In this work, we focus on the instance based on SQIsignHD. Compared with SQIsignHD, PIsignHD has the same signature size (even smaller in some application scenarios). For the NIST-I security level, the signature size of PIsignHD can be reduced to 519 bits, while the SQIsignHD signature takes 870 bits. Additionally, PIsignHD has an efficient online signing process and offers highly desirable application flexibility. In our experiments, the online signing process of PIsignHD runs in 4 ms.
Last updated:  2025-12-23
Suwako: A Logarithmic-Depth Modular Reduction for Arbitrary Trinomials over $\mathbb{F}_{2^m}$ without Pre-computation
Junyu Zhou, Jing Wang, Hao Ren, Si Gao, and Xiao Lan
Modular reduction over binary extension fields $\mathbb{F}_{2^m}$ is a fundamental operation in cryptographic implementations, including GCM and Elliptic Curve Cryptography. Traditional reduction algorithms (e.g., linear LFSR-based methods) are highly sensitive to the algebraic structure of the defining polynomial. This sensitivity is especially acute for trinomials $P(x) = x^m + x^t + 1$, where cryptographic standards have historically mandated the use of ``friendly'' polynomials (with small $t$) to avoid the linear performance degradation associated with ``random'' or ``unfriendly'' parameters. In this paper, we challenge this constraint by introducing Suwako, a novel reduction algorithm. By exploiting the self-similar algebraic structure of the reduction map, Suwako transforms the reduction process from a serial iterative chain (dependent on the degree gap $\Delta = m-t$) into a logarithmic-depth binary-doubling structure. We theoretically prove that Suwako achieves $O(\log m)$ folding depth for arbitrary trinomials, regardless of the position of the middle term $t$. Furthermore, unlike window-based or Montgomery/Barrett reduction methods, Suwako requires no pre-computation, making it optimal for dynamic environments.
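To make the serial baseline that Suwako improves on concrete, here is a toy sketch of the classical iterative reduction modulo a trinomial over $\mathbb{F}_{2^m}$, with polynomials encoded as Python ints (bit $i$ holds the coefficient of $x^i$). This is background only, not the paper's algorithm; the parameters $m = 163$, $t = 80$ are arbitrary illustrative choices.

```python
def gf2_trinomial_reduce(a: int, m: int, t: int) -> int:
    """Reduce a(x) mod P(x) = x^m + x^t + 1 over GF(2), the serial way.

    Each pass folds the high part using x^m == x^t + 1; the number of
    passes grows with the degree gap m - t, which is exactly the serial
    dependency the abstract describes.
    """
    mask = (1 << m) - 1
    while a >> m:                     # while deg(a) >= m
        high = a >> m                 # a = high(x) * x^m + low(x)
        a = (a & mask) ^ high ^ (high << t)   # substitute x^m -> x^t + 1
    return a

# Sanity check: x^m reduces to x^t + 1.
m, t = 163, 80
assert gf2_trinomial_reduce(1 << m, m, t) == (1 << t) | 1
```

Because reduction is $\mathbb{F}_2$-linear, the routine commutes with XOR, which is a convenient self-check when experimenting with other $(m, t)$ pairs.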
Last updated:  2025-12-22
FRIVail: A Data Availability Scheme based on FRI Binius
Rachit Anand Srivastava
Data Availability Sampling (DAS) has emerged as a key scalability technique for blockchain systems, enabling light clients to verify that block data have been fully published without downloading them in their entirety. We introduce FRIVail, a new DAS construction built on top of the FRI-Binius polynomial commitment scheme, designed for datasets composed of many independent single-row payloads that together form a block’s data blob. FRIVail exploits the intrinsic Reed–Solomon structure of FRI, wherein each commitment naturally encodes a codeword that light clients can sample directly. Each row of the blob is assigned an independent FRI proof. These row-level proofs are then combined into a global availability certificate using one of three aggregation strategies. The first constructs a succinct zero-knowledge proof attesting to the correct verification of all row-level FRI proofs, yielding a compact ZK proof of proofs that enables succinct global verification while preserving row independence. The second is a fully post-quantum construction that recursively applies FRI-Binius to build a proof of proofs. In this setting, global verification relies on FRI proximity checks, but reconstruction of the aggregated proof polynomial is required to recover embedded row-level information. The third is a hybrid aggregation based on KZG polynomial commitments, where the aggregated polynomial admits direct algebraic openings but relies on pairing-based assumptions and a trusted setup, and is therefore not post-quantum. In all variants, light clients verify availability via a small number of local opening checks against the header commitment, without downloading entire rows or the full blob. We formalize DAS security in this multi-row, multi-proof setting and show that FRIVail achieves sublinear verification complexity, robustness against adversarial availability equivocation at the row level, and resistance to correlated sampling attacks. 
FRIVail provides a modular foundation for next-generation blockchain data availability protocols, supporting zero-knowledge-based, fully post-quantum, and hybrid cryptographic deployments.
Last updated:  2025-12-22
On Delegation of Verifiable Presentations from mdoc and BBS Credentials
Andrea Flamini, Andrea Gangemi, Enrico Guglielmino, and Vincenzo Orabona
The interest in verifiable credential systems has gained traction since the eIDAS 2.0 Regulation was published. This regulation instructs EU member states to provide their citizens with digital identity wallets (EUDI Wallet) that must store the credentials and enable privacy-preserving presentation of identity information to relying parties. This new digital identity system requires defining new protocols and procedures to perform tasks involving the disclosure of identity information. One such procedure is the delegation of attestations, as reported in the EUDI Wallet Reference Implementation Roadmap. In this work, we address the problem of constructing secure processes for the delegation of verifiable presentations derived from both verifiable and anonymous credentials. Our goal is to enable a credential holder (the delegator) to securely delegate another party (the delegatee) to present a credential on their behalf. We introduce the notion of a verifiable presentation delegation scheme, formalizing the core algorithms, namely delegation issuance, delegated presentation, and presentation verification, and defining the relevant security properties that such a scheme should satisfy: correctness, unforgeability, and, when the scheme is built on top of anonymous credentials, unlinkability. We present two concrete instantiations of delegation schemes: the first is built on top of mdoc verifiable credentials, the credential format currently supported by the EUDI Wallet Architecture and Reference Framework (EUDI ARF), while the second is built on top of BBS anonymous credentials. Finally, we discuss and analyze the security of our constructions in terms of the security properties we have introduced.
Last updated:  2025-12-22
A New Approach to Large Party Beaver-Style MPC with Small Computational Overhead
Aayush Jain, Huijia Lin, and Nuozhou Sun
Secure multi-party computation (MPC) enables $N$ parties to jointly evaluate any function over their private inputs while preserving confidentiality. While decades of research have produced concretely efficient protocols for small to moderate numbers of participants, scaling MPC to thousands of parties remains a central challenge. Most existing approaches either incur per-party costs linear in $N$, due to pairwise computations, or rely on heavy cryptographic tools such as homomorphic encryption, which introduces prohibitive overheads when evaluating Boolean circuits. In this work, we introduce a new lightweight approach to designing semi-honest MPC protocols with per-party, per-gate computation and communication costs that are independent of $N$. Our construction leverages the Sparse Learning Parity with Noise (Sparse LPN) assumption in the random oracle model to achieve per-gate costs of $O(k^2 \cdot c(\lambda))$ computation and $O(c(\lambda))$ communication, where $k$ is the sparsity parameter for the Sparse LPN assumption and $c(\lambda)$ is an arbitrarily small super-constant in the security parameter $\lambda$. Assuming Sparse LPN remains hard for any super-constant sparsity, this yields the first semi-honest MPC protocol in the dishonest-majority setting with per-party per-gate costs bounded by an arbitrarily small super-constant overhead in $\lambda$. Structurally, our MPC instantiates a Beaver-style MPC with the required correlations generated efficiently. Departing from prior approaches that generate Beaver triples silently (Boyle et al., 2019; 2020; 2022) or via homomorphic computation (Damgård et al., 2012) for Beaver-style MPC, the focus of this work rests on efficiently generating a weaker correlation.
In particular, using Sparse LPN we show that if we relax the correctness requirement in generating random Beaver triples to permit a tunably small inverse-polynomial error probability, such triples can be silently generated with arbitrarily small super-constant per-party computation. We then show that such correlations can be used in an efficient online phase similar to Beaver's protocol (with a tiny super-constant factor blow-up in communication).
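As background for the abstract above, here is a minimal, intentionally insecure sketch of Beaver's multiplication trick over additive shares, showing the triple correlation $(a, b, c = ab)$ whose relaxed generation the paper studies. The modulus and the in-the-clear generation and opening steps are illustrative simplifications; in a real protocol the triple is produced by a preprocessing phase and $d$, $e$ are opened share-wise.

```python
import random

P = 2**61 - 1  # an illustrative prime modulus

def share(x, n=3):
    """Additively share x among n parties."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

def beaver_multiply(x_sh, y_sh):
    """Multiply shared x and y using one random triple (a, b, c = a*b)."""
    a, b = random.randrange(P), random.randrange(P)
    a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % P)
    # Parties open d = x - a and e = y - b; these masks reveal nothing
    # about x and y because a and b are uniformly random.
    d = (reconstruct(x_sh) - a) % P
    e = (reconstruct(y_sh) - b) % P
    # Locally: z_i = c_i + d*b_i + e*a_i, plus d*e at one designated party,
    # so that sum(z) = c + d*b + e*a + d*e = (a+d)(b+e) = x*y.
    z_sh = [(c + d * bi + e * ai) % P for c, ai, bi in zip(c_sh, a_sh, b_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P
    return z_sh

x, y = 1234567, 7654321
assert reconstruct(beaver_multiply(share(x), share(y))) == (x * y) % P
```

The paper's relaxation concerns exactly the generation of $(a, b, c)$: allowing a small error probability in $c = ab$ makes the triples far cheaper to produce silently.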
Last updated:  2025-12-22
Zyga: Optimized Zero-Knowledge Proofs with Dynamic Public Inputs
Tiago A. O. Alves and Vitor Py Braga
We present Zyga, a pairing-based zero-knowledge proof system optimized for privacy-preserving DeFi applications. Our main contribution is an enhancement of existing zkSNARK constructions that enables dynamic public input substitution during verification while maintaining privacy of witness components through one-sided encoding. The one-sided encoding aspect favors practical deployment constraints on Solana and Ethereum where G2 scalar multiplications are computationally expensive. Zyga separates private values (blinded through trusted setup) from public values (instantiated on-chain), enabling applications like private trading against current market rates without reproofing. We further introduce two-sided encoding, an extension that removes circuit structure restrictions by adding a private B commitment and a rebase mechanism, enabling arbitrary R1CS circuits with proof reuse across changing base values. We provide rigorous security analysis under discrete logarithm and q-Strong Diffie-Hellman assumptions, demonstrating computational soundness, zero-knowledge, and completeness. Performance analysis shows verification requires only 3–4 pairings with constant proof size, making it practical for blockchain deployment where transaction costs are critical.
Last updated:  2025-12-22
Ciphertext-Ciphertext Matrix Multiplication: Fast for Large Matrices
Jai Hyun Park
Matrix multiplication of two encrypted matrices (CC-MM) is a key challenge for privacy-preserving machine learning applications. As modern machine learning models focus on scalability, fast CC-MM on large datasets is increasingly in demand. In this work, we present a CC-MM algorithm for large matrices. The algorithm consists of plaintext matrix multiplications (PP-MM) and ciphertext matrix transpose algorithms (C-MT). We propose a fast C-MT algorithm, which is computationally inexpensive compared to PP-MM. By leveraging high-performance BLAS libraries to optimize PP-MM, we implement large-scale CC-MM with substantial performance improvements. Furthermore, we propose lightweight algorithms, significantly reducing the key size from $1\ 960$ MB to $1.57$ MB for CC-MM with comparable efficiency. In a single-thread implementation, the C-MT algorithm takes $0.76$ seconds to transpose a $2\ 048\times 2\ 048$ encrypted matrix. The CC-MM algorithm requires $85.2$ seconds to multiply two $4\ 096\times 4\ 096$ encrypted matrices. For large matrices, our algorithm outperforms the state-of-the-art CC-MM method from Jiang-Kim-Lauter-Song [CCS'18] by a factor of over $800$.
Last updated:  2025-12-22
Streaming Function Secret Sharing and Its Applications
Xiangfu Song, Jianli Bai, Ye Dong, Yijian Liu, Yu Zhang, Xianhui Lu, and Tianwei Zhang
Collecting statistics from users of software and online services is crucial to improve service quality, yet obtaining such insights while preserving individual privacy remains a challenge. Recent advances in function secret sharing (FSS) make scalable privacy-preserving measurement (PPM) possible, leading to ongoing standardization at the IETF. However, FSS-based solutions still face several challenges for streaming analytics, where messages are continuously sent and secure computation tasks are repeatedly performed over incoming messages. We introduce streaming function secret sharing (SFSS), a new variant of FSS that is particularly suitable for secure computation over streaming messages. We formalize SFSS and propose concrete constructions, including SFSS for point functions, predicate functions, and feasibility results for generic functions. SFSS powers several promising applications in a simple and modular fashion, including conditional transciphering, policy-hiding aggregation, and attribute-hiding aggregation. In particular, our SFSS formalization and constructions identify security flaws and efficiency bottlenecks in existing solutions, and SFSS-powered solutions achieve the expected security goal with asymptotically and concretely better efficiency and/or enhanced functionality.
Last updated:  2025-12-22
Attacking and Securing Hybrid Homomorphic Encryption Against Power Analysis
Aikata Aikata, Maciej Czuprynko, Nedžma Musovic, Emira Salkić, and Sujoy Sinha Roy
We present the first power side-channel analysis of a Hybrid Homomorphic Encryption (HHE) tailored symmetric encryption scheme. HHE combines lightweight client-side Symmetric Encryption (SE) with server-side homomorphic evaluation, enabling efficient privacy-preserving computation for the client and minimizing the communication overhead. Recent integer-based HHE designs such as PASTA, MASTA, HERA, and Rubato rely on prime-field arithmetic, but their side-channel security has not been studied. This gap is critical, as modular arithmetic and large key spaces in integer-based schemes introduce new leakage vectors distinct from those in conventional Boolean symmetric ciphers. In this work, we close this gap by presenting the first power side-channel analysis of an HHE-tailored scheme - HERA. Our results demonstrate a successful key recovery from as few as 40 power traces using Correlation Power Analysis. In addition to showing that such attacks are feasible, we develop the first masking framework for integer-based SE schemes to mitigate them. Our design integrates PINI-secure gadgets with assembly-level countermeasures to address transition leakage, and we validate its effectiveness using the Test Vector Leakage Assessment. Our experiments confirm both the practicality of the attack and the strength of the proposed countermeasures. We also demonstrate that the framework extends to other integer-based HHE schemes, by applying our technique to PASTA. Thus, we provide leakage models, identify relevant attack targets, and define evaluation benchmarks for integer-based HHE-tailored SE schemes, thereby filling a longstanding gap and laying the foundation for side-channel-resilient design in this area.
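To illustrate the attack class in the abstract above (not the paper's HERA setup), here is a toy Correlation Power Analysis on simulated traces: we assume Hamming-weight leakage of p XOR k for a single hypothetical 8-bit key byte with Gaussian noise, and rank key guesses by Pearson correlation. All parameters are illustrative assumptions.

```python
import random

def hw(v):
    """Hamming weight of an 8-bit value."""
    return bin(v).count("1")

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

random.seed(1)
true_key = 0x3C  # the secret the attacker wants to recover
plaintexts = [random.randrange(256) for _ in range(200)]
# Simulated traces: Hamming-weight leakage of (p XOR key) plus noise.
traces = [hw(p ^ true_key) + random.gauss(0, 0.5) for p in plaintexts]

# CPA: for each key guess, correlate predicted leakage with the traces.
scores = {k: abs(pearson([hw(p ^ k) for p in plaintexts], traces))
          for k in range(256)}
recovered = max(scores, key=scores.get)
assert recovered == true_key
```

The wrong guesses still correlate weakly (a guess differing in $d$ bits has expected correlation about $(8-2d)/8$ of the correct one), which is why enough traces, 40 in the paper's attack on HERA, are needed to separate the ranking.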
Last updated:  2025-12-22
High-Performance SIMD Software for Spielman Codes in Zero-Knowledge Proofs
Florian Krieger, Christian Dobrouschek, Florian Hirner, and Sujoy Sinha Roy
We present the first high-performance SIMD software implementation of Spielman codes for their use in polynomial commitment schemes and zero-knowledge proofs. Spielman codes, as used in the Brakedown framework, are attractive alternatives to Reed-Solomon codes and benefit from linear-time complexity and field agnosticism. However, the practical deployment of Spielman codes has been hindered by a lack of research on efficient implementations. The involved costly finite-field arithmetic and random memory accesses operate on large volumes of data, typically exceeding gigabytes; these pose significant challenges for performance gains. To address these challenges, we propose several computational and memory-related optimizations that together reach an order-of-magnitude performance improvement in software. On the computation side, we propose SIMD optimizations using the AVX-512-IFMA instruction set and introduce a lazy reduction method to minimize the modular arithmetic cost. On the memory side, we implement a cache-friendly memory layout and a slicing technique, which exploit the CPU memory hierarchy. Finally, we present our multithreading approach to improve throughput without saturating memory bandwidth. Compared to prior Spielman software, our optimizations achieve speedups of up to 26.7x and 20.6x for single- and multi-threaded execution, respectively. In addition, instantiating our software with 64 threads on a high-end CPU even outperforms a recent FPGA accelerator by up to 4.3x for small and mid-sized polynomials. Our improvements make Spielman codes competitive with well-optimized Reed-Solomon codes on software platforms.
Last updated:  2025-12-22
Gravity of the Situation: Security Analysis on Rocket.Chat E2EE
Hayato Kimura, Ryoma Ito, Kazuhiko Minematsu, and Takanori Isobe
Rocket.Chat is a group chat platform widely deployed in industries and national organizations, with over 15 million users across 150 countries. One of its main features is an end-to-end encryption (E2EE) protocol; however, no cryptographic security analysis has been conducted. We conduct an in-depth cryptographic analysis of Rocket.Chat's E2EE protocol and identify multiple significant flaws that allow a malicious server or even an outsider to break the confidentiality and integrity of the group chat. Specifically, we formally model and analyze the protocol using ProVerif under the Dolev-Yao model, uncovering multiple theoretical weaknesses and verifying that some of them lead to practical attacks. Furthermore, through meticulous manual analysis, we identify additional vulnerabilities, including implementation flaws and cryptographic weaknesses such as CBC malleability, and demonstrate how they are exploitable in practical attack scenarios. To validate our findings, we develop Proof-of-Concept implementations, highlighting the real-world feasibility of these attacks. We also propose mitigation techniques and discuss the implications of our attacks.
Last updated:  2025-12-22
The Nonlinear Filter Model of Stream Cipher Redivivus
Claude Carlet and Palash Sarkar
The nonlinear filter model is an old and well understood approach to the design of secure stream ciphers. Extensive research over several decades has shown how to attack stream ciphers based on this model and has identified the security properties required of the Boolean function used as the filtering function to resist such attacks. This led to the problem of constructing Boolean functions which provide adequate security and at the same time are efficient to implement. Unfortunately, over the last two decades no fully satisfactory solution to this problem has appeared in the literature. The lack of good solutions has effectively led to the nonlinear filter model becoming more or less obsolete. This is a big loss to the cryptographic design toolkit, since the great advantages of the nonlinear filter model are its simplicity, well-understood security and the potential to provide low-cost solutions for hardware-oriented stream ciphers. In this paper, we revive the nonlinear filter model by constructing appropriate Boolean functions which provide the required security and are also efficient to implement. We put forward concrete suggestions of stream ciphers which are $\kappa$-bit secure against known types of attacks for $\kappa=80$, 128, 160, 192, 224 and 256. For the 80-bit and the 128-bit security levels, the gate count estimates of our proposals compare quite well to the famous ciphers Trivium and Grain-128a respectively, while for the 256-bit security level, we do not know of any other stream cipher design which has such a low gate count.
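For readers unfamiliar with the model, here is a toy nonlinear filter generator: a single LFSR whose state is passed through a nonlinear Boolean filtering function to produce each keystream bit. The register length, feedback taps, and filter below are arbitrary illustrative choices, not the constructions proposed in the paper.

```python
def filter_keystream(state_bits, taps, nbits):
    """Run a Fibonacci LFSR and filter its state nonlinearly.

    state_bits: initial register contents (list of 0/1).
    taps: positions XORed to form the feedback bit.
    nbits: number of keystream bits to produce.
    """
    state = list(state_bits)
    out = []
    for _ in range(nbits):
        # Nonlinear filter: a linear part XORed with a product term.
        # (A real design needs a filter with high nonlinearity, resiliency,
        # and algebraic immunity, which is the paper's subject.)
        z = state[0] ^ state[3] ^ (state[5] & state[9])
        out.append(z)
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = state[1:] + [fb]   # shift, append feedback
    return out

ks = filter_keystream([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
                      taps=[0, 2, 3, 5], nbits=64)
assert len(ks) == 64 and set(ks) <= {0, 1}
```

The hardware appeal the abstract mentions is visible even in this sketch: the whole cipher is one shift register plus a few gates for the filter.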
Last updated:  2025-12-21
Far-Field $Singing$ FPGAs: Repurposing Routing Fabrics into 100 m Covert Radiators
Udi Alush, Roey Amitay, Erez Danieli, and Itamar Levi
FPGAs rely on highly dense and symmetric internal routing networks to interconnect their configurable logic elements. In standard applications, these interconnects are used solely for digital signal transfer within the device, leaving many routing paths idle. We study the surprising ability of configurable FPGA routing fabrics to act as intentional radiators when structured and driven coherently. Building on prior near-field demonstrations (few centimeters), we (i) present a practical toolchain and methodology for synthesizing “fabric-only” antennas using constrained placement/routing; (ii) demonstrate reliable far-field reception at extremely long ranges (≤ 100 m) and quantified bit-error performance at meter-scale ranges using ASK/FSK modulation and simple ECC; and (iii) analyze the security implications by formalizing adversary capabilities, enumerating novel multi-tenant attack vectors, and outlining detection and mitigation strategies. Our work bridges implementation engineering, complex physical-layer measurement (with a set of complex far-field measurement apparatus), and security analysis, and highlights the urgent need for screening and runtime monitoring in shared FPGA environments. We systematically shaped and combined unused paths into contiguous structures, such as {fractal, loop, dipole, snake, spiral, array} antennas, which required building an automation toolchain. When energized, these embedded structures emit measurable electromagnetic energy that can serve as a stealth communication channel. We have extended this concept far beyond previous near-field demonstrations, achieving reliable reception in the far field, demonstrated rigorously with various measurement setups, a first for this class of long-range FPGA-based antennas without any external radiating RF hardware, from a tiny ∼1×1 cm² device. We further show a Trojan example, triggered by rare events, attacking a decryption oracle model.
Last updated:  2025-12-21
ALKAID: Accelerating Three-Party Boolean Circuits by Mixing Correlations and Redundancy
Ye Dong, Xudong Chen, Xiangfu Song, Yaxi Yang, Wen-jie Lu, Tianwei Zhang, Jianying Zhou, and Jin-Song Dong
Secure three-party computation (3PC) with semi-honest security under an honest majority offers notable efficiency in computation and communication; for Boolean circuits, each party sends a single bit for every AND gate, and nothing for XOR. However, round complexity remains a significant challenge, especially in high-latency networks. Some works support multi-input AND gates and thereby reduce online round complexity, but they require \textit{exponential} communication for generating the correlations in either the preprocessing or the online phase. How to extend AND gates to multiple inputs while keeping correlation generation efficient has remained open. To address this problem, we propose ALKAID, a round-efficient 3PC framework for Boolean circuits built on improved multi-input AND gates. By mixing correlations and redundancy, we propose a concretely efficient correlation generation approach for small input sizes $N\le 4$ and shift the correlation generation to the preprocessing phase. Building on this, we construct a round-efficient AND protocol for the general case $N>4$. Exploiting the improved multi-input AND gates, we design fast depth-optimized parallel prefix adders and share conversion primitives in 3PC, achieved with new techniques and optimizations for better concrete efficiency. We further apply these optimized primitives to improve the efficiency of secure non-linear functions in machine learning. We implement ALKAID and extensively evaluate its performance. Compared to the state of the art, including ABY3 (CCS'2018), Trifecta (PoPETs'2023), and METEOR (WWW'2023), ALKAID achieves $1.5\times$--$2.5\times$ efficiency improvements for Boolean primitives and non-linear functions, with better or comparable communication.
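The two-input baseline the abstract recalls, one bit sent per party per AND gate over replicated Boolean shares, can be sketched as follows. This is the textbook honest-majority protocol, simulated in one process with the zero-sharing mask drawn centrally for brevity; it is not ALKAID's multi-input construction.

```python
import random

def share_bit(x):
    """Replicated sharing: x = s0 ^ s1 ^ s2; party i holds (s_i, s_{i+1})."""
    s = [random.randrange(2) for _ in range(2)]
    s.append(x ^ s[0] ^ s[1])
    return [(s[i], s[(i + 1) % 3]) for i in range(3)]

def reconstruct(shares):
    return shares[0][0] ^ shares[1][0] ^ shares[2][0]

def and_gate(xs, ys):
    """Each party locally computes its cross terms, masked by zero-shares.

    In the real protocol the alpha_i come from correlated randomness;
    here they are sampled centrally so that alpha0 ^ alpha1 ^ alpha2 == 0.
    """
    a = [random.randrange(2) for _ in range(2)]
    alpha = a + [a[0] ^ a[1]]
    z = [(xs[i][0] & ys[i][0]) ^ (xs[i][0] & ys[i][1])
         ^ (xs[i][1] & ys[i][0]) ^ alpha[i] for i in range(3)]
    # Party i sends its single bit z_i to party i-1, restoring the
    # replicated format (z_i, z_{i+1}) -- one bit per party per AND gate.
    return [(z[i], z[(i + 1) % 3]) for i in range(3)]

for x in (0, 1):
    for y in (0, 1):
        assert reconstruct(and_gate(share_bit(x), share_bit(y))) == x & y
```

Correctness follows because the three local products together cover all nine cross terms $x_i y_j$ exactly once, while the $\alpha_i$ cancel.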
Last updated:  2025-12-20
Yoyo tricks with a BEANIE
Xavier Bonnetain, Sébastien Duval, Virginie Lallemand, Thierno Mamoudou Sabaly, Thomas Sagot, and Thibault Sanvoisin
BEANIE is a 32-bit tweakable block cipher, published in ToSC 2025.4, designed for memory encryption of microcontroller units. In this paper, we propose its first third-party analysis and present a key recovery against the full 5+5 rounds of BEANIE using a yoyo distinguisher. The attack has a cost close to the security claim of $2^{80}$ time and $2^{40}$ data.
Last updated:  2025-12-20
SoK: Verifiable Federated Learning
Francesco Bruschi, Marco Esposito, Tommaso Gagliardoni, and Andrea Rizzini
Federated Learning (FL) is an advancement in Machine Learning motivated by the need to preserve the privacy of the data used to train models. While it effectively addresses this issue, the multi-participant paradigm on which it is based introduces several challenges. Among these are the risks that participating entities may behave dishonestly and fail to perform their tasks correctly. Moreover, due to the distributed nature of the architecture, attacks such as Sybil and collusion are possible. Recently, with advances in Verifiable Computation (VC) and Zero-Knowledge Proofs (ZKP), researchers have begun exploring how to apply these technologies to Federated Learning aiming to mitigate such problems. In this Systematization of Knowledge, we analyze the first, very recent works that attempt to integrate verifiability features into classical FL tasks, comparing their approaches and highlighting what is achievable with the current state of VC methods.
Last updated:  2025-12-20
Who Verifies the Verifiers? Lessons Learned From Formally Verified Line-Point Zero-Knowledge
Sabine Oechsner, Vitor Pereira, and Peter Scholl
Computer-aided cryptography, with particular emphasis on formal verification, promises an interesting avenue to establish strong guarantees about cryptographic primitives. The appeal of formal verification is to replace error-prone pen-and-paper proofs with a proof that was checked by a computer and, therefore, does not need to be checked by a human. In this paper, we ask how reliable these machine-checked proofs are by analyzing a formally verified implementation of the Line-Point Zero-Knowledge (LPZK) protocol (Dittmer, Eldefrawy, Graham-Lengrand, Lu, Ostrovsky and Pereira, CCS 2023). The implementation was developed in EasyCrypt and compiled into OCaml code that was claimed to be high-assurance, i.e., to offer formal guarantees of completeness, soundness, and zero knowledge. We show that despite these formal claims, the EasyCrypt model was flawed, and the implementation (supposed to be high-assurance) had critical security vulnerabilities. Concretely, we demonstrate that: 1) the EasyCrypt soundness proof was incorrectly done, allowing an attack on the scheme that leads honest verifiers into accepting false statements; and 2) the EasyCrypt formalization inherited a deficient model of zero knowledge for a class of non-interactive zero-knowledge protocols that also allows the verifier to recover the witness. In addition, we demonstrate 3) a gap in the proof of the perfect zero-knowledge property of the LPZK variant of Dittmer, Ishai, Lu and Ostrovsky (CCS 2022) on which the EasyCrypt proof is based, which, depending on the interpretation of the protocol and security claim, could allow a malicious verifier to learn the witness. Our findings highlight the importance of scrutinizing machine-checked proofs, including their models and assumptions. We offer lessons learned for both users and reviewers of tools like EasyCrypt, aimed at improving the transparency, rigor, and accessibility of machine-checked proofs.
By sharing our methodology and challenges, we hope to foster a culture of deeper engagement with formal verification in the cryptographic community.
Last updated:  2025-12-20
A Unified Key Recovery Framework for Impossible Boomerang Attacks: Applications to Full-Round ARADI and SKINNYe v2
Xichao Hu, Lin Jiao, Dengguo Feng, Yongqiang Li, Senpeng Wang, Yonglin Hao, and Xinxin Gong
The impossible boomerang attack is a powerful cryptanalytic technique, but existing key recovery methods face several limitations that restrict its applicability. Specifically, the key pre-guessing is coarse-grained, S-box details are ignored in the differential propagation, and the complexity estimation and the key guessing order determination remain rudimentary. To overcome these issues, we introduce three key improvement measures. First, we propose a flexible partial key and difference pre-guessing technique based on directed graphs, enabling selective identification of required keys and differences for generating partial pairs and quartets. Second, we propose a pre-sieving technique to eliminate invalid quartets early by exploiting cipher-specific details. Third, we introduce an automatic key-guessing strategy based on the same directed graphs to efficiently determine valid guessing orders. We integrate these techniques to develop a unified key recovery framework for impossible boomerang attacks, accompanied by a formal and precise characterization of the overall complexity. This is the first framework to support flexible key and difference pre-guessing while incorporating block cipher details during key recovery for impossible boomerang attacks. Crucially, it enables the automatic generation of detailed recovery steps, a capability missing in prior work. As applications, under the four related-key/tweakey setting, we apply the framework to ARADI, a low-latency cipher proposed by the National Security Agency (NSA), and SKINNYe v2, a threshold-implementation-friendly cipher proposed at EUROCRYPT 2020. For ARADI, we achieve the first full-round attack with $2^{130}$ data, $2^{253.78}$ time, and $2^{235.75}$ memory complexity. For SKINNYe v2, we present the first 34-round impossible boomerang attack with $2^{66}$ data, $2^{253.75}$ time, and $2^{239.75}$ memory complexity.
These results demonstrate the framework’s significance and its substantial improvement in advancing the impossible boomerang attack.
Last updated:  2025-12-20
Quantum Periodic Distinguisher Construction: Symbolization Method and Automated Tool
Qun Liu, Haoyang Wang, Jinliang Wang, Boyun Li, and Meiqin Wang
As one of the famous quantum algorithms, Simon's algorithm enables the efficient derivation of the period of periodic functions in polynomial time. However, the complexity of constructing periodic functions has hindered the widespread application of Simon's algorithm in symmetric-key cryptanalysis. Currently, aside from the exhaustive search-based testing method introduced by Canale et al. at CRYPTO 2022, there is no unified model for effectively searching for periodic distinguishers. Although Xiang et al. established a link between periodic functions and truncated differential theory at ToSC 2024, their approach lacks the ability to construct periods using unknown differentials and does not provide automated tools. This limitation underscores the inadequacy of existing methods in identifying periodic distinguishers for complex structures. In this paper, we address the challenge of advancing periodic distinguishers for symmetric-key ciphers. First, we propose a more generalized method for constructing periodic distinguishers, addressing the limitations of Xiang et al.'s theory in handling unknown differences. We further extend it to probabilistic periodic distinguishers. As a result, our method can cover a wider range of periodic distinguishers. Second, we introduce a novel symbolic representation to simplify the search for periodic distinguishers, and propose the first fully automated SMT-based search model, which efficiently addresses the challenges of manual searching in complex structures. Based on our method, we have achieved new quantum distinguishers with the following round configurations: 10 rounds for GFS-4F, 10 rounds for LBlock, 10 rounds for TWINE, and 16 rounds for Skipjack-B, improving the previous best results by 1, 2, 2, and 3 rounds, respectively. Our model also identifies the first 7/8/9-round periodic distinguishers for SKINNY. Compared with existing distinguishers (Hadipour et al., CRYPTO 2024) with the same number of rounds in the classical setting, our distinguishers achieve lower data complexity.
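For context, the promise that Simon's algorithm exploits, $f(x) = f(x \oplus s)$ for a hidden period $s$, can be illustrated with a small classical toy (all parameters hypothetical, unrelated to the ciphers analysed in the paper):

```python
import random

def make_periodic_function(n, s):
    """Build a toy function f on {0,1}^n with f(x) = f(x XOR s) for all x.

    This is exactly the promise Simon's algorithm exploits; here we just
    construct the function and check the period classically by brute force.
    """
    random.seed(0)
    f = {}
    for x in range(2 ** n):
        if x not in f:
            value = random.getrandbits(8)  # shared label for the coset {x, x ^ s}
            f[x] = value
            f[x ^ s] = value
    return f

n, s = 6, 0b010110
f = make_periodic_function(n, s)
# Every input collides exactly with its XOR-shift by the hidden period s.
assert all(f[x] == f[x ^ s] for x in range(2 ** n))
```

A quantum circuit finds $s$ in polynomially many queries to such an $f$; the paper's contribution is automating the search for cipher-level functions with this structure.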
Last updated:  2025-12-20
An Ideal Linear Secret Sharing Scheme for Complete $t$-Partite $k$-Uniform Hypergraph Access Structures
Chunming Tang, Zheng Chen, Haonan Fu, and Hongwei Zhu
Secret sharing schemes are a crucial cryptographic primitive, with linear codes serving as a primary tool for their construction. This paper systematically investigates the construction of ideal secret sharing schemes for complete $t$-partite $k$-uniform hypergraph access structures using linear codes as the tool. First, it is proved that the generator matrix $G$ of an ideal linear code realizing a complete $t$-partite $2$-uniform hypergraph access structure must have a rank of $2$. Simultaneously, a novel method for constructing an ideal secret sharing scheme that realizes such access structures is proposed. Building on this foundation, the case of complete $t$-partite $2$-uniform hypergraphs is extended to complete $t$-partite $k$-uniform hypergraphs, and a method for constructing ideal secret sharing schemes to realize them is provided. Compared with existing approaches, both Shamir’s method and the scheme proposed by Brickell et al. are special cases of our proposed approach.
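As a point of reference, Shamir's method (which the abstract notes is a special case of the proposed approach) can be sketched in a few lines over a toy prime field (illustrative parameters only):

```python
import random

P = 2089  # small prime field, for illustration only

def shamir_share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it.

    Shares are evaluations of a random degree-(t-1) polynomial whose
    constant term is the secret.
    """
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def shamir_reconstruct(shares):
    """Lagrange interpolation at x = 0 over F_P."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = shamir_share(1234, t=3, n=5)
assert shamir_reconstruct(shares[:3]) == 1234   # any 3 shares suffice
assert shamir_reconstruct(shares[1:4]) == 1234
```

The hypergraph access structures of the paper generalize this threshold structure: which subsets are authorized is no longer determined by size alone.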
Last updated:  2025-12-19
Fully Distributed Multi-Point Functions for PCGs and Beyond
Amit Agarwal, Srinivasan Raghuraman, and Peter Rindal
We introduce new Distributed Multi-Point Function (DMPF) constructions that make multi-point sharing as practical as the classic single-point (DPF) case. Our main construction, Reverse Cuckoo, replaces the ``theoretical'' cuckoo insertions approach to DMPFs with an MPC-friendly linear solver that circumvents the concrete inefficiencies. Combined with our new sparse DPF construction, we obtain the first fully distributed and efficient DMPF key generation that avoids trusted dealers and integrates cleanly with standard two-party MPC. Applied to pseudorandom correlation generators (PCGs), our DMPFs remove the dominant ``sum of $t$ DPFs'' bottleneck. In Ring-LPN and Stationary-LPN pipelines (Crypto 2020, 2025), this translates to an order of magnitude more Beaver triples per second with an order of magnitude less communication compared to the status quo by Keller et al. (Eurocrypt 2018). The gains persist across fields and rings ($\mathbb{F}_{p^k}$, $\mathbb{Z}_{2^k}$ for $k\geq 1$) and are complementary to existing PCG frameworks: our constructions drop in as a black-box replacement for their sparse multi-point steps, accelerating all PCGs that rely on such encodings. We provide a complete protocol suite (deduplication, hashing, linear solver, sparse DPF instantiation) with a semi-honest security proof via a straight-line simulator that reveals only hash descriptors and aborts with negligible (cuckoo-style) probability. A prototype implementation validates the asymptotics with strong concrete performance improvements.
Last updated:  2025-12-19
TAPAS: Datasets for Learning the Learning with Errors Problem
Eshika Saxena, Alberto Alfarano, François Charton, Emily Wenger, and Kristin Lauter
AI-powered attacks on Learning with Errors (LWE), an important hard math problem in post-quantum cryptography, rival or outperform "classical" attacks on LWE under certain parameter settings. Despite the promise of this approach, a dearth of accessible data limits AI practitioners' ability to study and improve these attacks. Creating LWE data for AI model training is time- and compute-intensive and requires significant domain expertise. To fill this gap and accelerate AI research on LWE attacks, we propose the TAPAS datasets, a Toolkit for Analysis of Post-quantum cryptography using AI Systems. These datasets cover several LWE settings and can be used off-the-shelf by AI practitioners to prototype new approaches to cracking LWE. This work documents TAPAS dataset creation, establishes attack performance baselines, and lays out directions for future work.
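For readers unfamiliar with the underlying problem, a minimal sketch of what a single LWE data point looks like, $b = \langle a, s \rangle + e \bmod q$ (toy parameters; the actual TAPAS datasets follow the paper's much larger settings):

```python
import random

def lwe_samples(n, m, q, error_bound, seed=0):
    """Generate m toy LWE samples (a_i, b_i) with b_i = <a_i, s> + e_i mod q.

    The uniform error in [-error_bound, error_bound] is a toy stand-in for
    the narrow error distributions used in real parameter sets.
    """
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]
    A, b, e = [], [], []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]
        err = rng.randint(-error_bound, error_bound)
        A.append(a)
        e.append(err)
        b.append((sum(ai * si for ai, si in zip(a, s)) + err) % q)
    return s, A, b, e

s, A, b, e = lwe_samples(n=10, m=20, q=3329, error_bound=3)
# Each sample is consistent with the secret and its small error term.
assert all((sum(ai * si for ai, si in zip(a, s)) + err - bi) % 3329 == 0
           for a, bi, err in zip(A, b, e))
```

Recovering $s$ from $(A, b)$ alone is the hard problem; datasets of such pairs are what AI-based attacks train on.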
Last updated:  2025-12-19
LAKE: Lattice-Code Accelerated Kyber Encapsulation
Hassan Nasiraee
The standardization of CRYSTALS-Kyber (ML-KEM) by NIST represents a milestone in post-quantum security, yet its substantial communication overhead remains a critical bottleneck for resource-constrained environments. This paper introduces LAKE (Lattice-Code Accelerated Kyber Encapsulation), a novel cryptographic framework that symbiotically integrates coding theory into the Module-LWE structure. Unlike previous concatenation approaches, LAKE embeds density-optimized Construction-A lattices derived from Polar codes directly into the public matrix generation. This structural innovation yields a 15–25% reduction in ciphertext size while simultaneously improving the Decryption Failure Rate (DFR) from $2^{-139}$ to $2^{-156}$, leveraging innate coding gains to suppress noise. We provide a rigorous reduction of LAKE's IND-CCA2 security to the hardness of the Structured Module-LWE problem. Although LAKE introduces a modest 8–15% computational overhead, it optimizes the critical "Compute-for-Bandwidth" trade-off, exploiting the asymmetry between low-cost local processing and high-cost transmission. Consequently, LAKE significantly enhances deployment viability in high-latency, energy-sensitive domains such as Satellite Communications (SatCom), Narrowband-IoT (NB-IoT), and tactical edge networks, where transmission efficiency is the dominant performance metric.
Last updated:  2025-12-19
Key Recovery Attacks on ZIP Ciphers: Application to ZIP-AES and ZIP-GIFT
Marcel Nageler, Debasmita Chakraborty, Simon Scherer, and Maria Eichlseder
The construction of beyond-birthday-bound secure pseudorandom functions (PRFs) from the XOR sum of two pseudorandom permutations (PRPs) has been known since EUROCRYPT 1998. However, the first concrete instance was only published recently at FSE 2022: the low-latency PRF Orthros. Subsequently, at ASIACRYPT 2024, Flórez-Gutiérrez et al. proposed the general framework of ZIP ciphers, where a block cipher $E_{1} \circ E_{0}$ is used to construct the PRF $E_{0} \oplus E_{1}^{-1}$. This allows re-using some of the cryptanalysis of the underlying block cipher. They propose the PRF ZIP-AES as the XOR sum of 5 AES encryption rounds and 5 decryption rounds. They discuss differential, linear, and integral distinguishers for this construction, but provide no concrete key recovery attacks. Furthermore, they propose ZIP-GIFT as a 64-bit PRF but leave cryptanalysis as future work. In this work, we provide the first third-party analysis of ZIP-AES and ZIP-GIFT. We focus our efforts on the unique challenges of performing key recovery attacks for ZIP ciphers and propose new techniques to overcome these challenges. We show differential, linear, and integral key recovery attacks for both PRFs. We develop new techniques for integral key recovery attacks and show how to extend differential characteristics by some rounds for key recovery.
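The ZIP construction itself is easy to state: from a block cipher split as $E_1 \circ E_0$, build the PRF $F(x) = E_0(x) \oplus E_1^{-1}(x)$. A toy sketch, with seeded random 8-bit permutations standing in for the AES/GIFT halves (illustrative only, no relation to the actual round functions):

```python
import random

def random_permutation(size, seed):
    """Toy stand-in for a keyed permutation half of the block cipher."""
    rng = random.Random(seed)
    perm = list(range(size))
    rng.shuffle(perm)
    return perm

N = 256  # toy 8-bit block
E0 = random_permutation(N, seed=1)   # first half of the cipher E1 o E0
E1 = random_permutation(N, seed=2)   # second half
E1_inv = [0] * N
for x, y in enumerate(E1):
    E1_inv[y] = x

def zip_prf(x):
    """ZIP construction: F(x) = E0(x) XOR E1^{-1}(x)."""
    return E0[x] ^ E1_inv[x]

outputs = [zip_prf(x) for x in range(N)]
# Unlike E0 or E1 alone, the XOR sum is (with overwhelming probability)
# not a permutation: distinct inputs can collide, as expected of a PRF.
assert len(set(outputs)) < N
```

Because both summands come from one cipher, distinguishers for the cipher partially transfer to the PRF, which is what the paper's key-recovery analysis exploits.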
Last updated:  2025-12-19
Refined Modelling of the Primal Attack, and Variants Against Module-Learning With Errors
Paola de Perthuis and Filip Trenkić
The primal attack reduces Learning with Errors (LWE) to the unique Shortest Vector Problem (uSVP), and then applies lattice reduction such as BKZ to solve the latter. Estimating the cost of the attack is required to evaluate the security of constructions based on LWE. Existing fine-grained estimators for the cost of the primal attack, due to Dachman-Soled--Ducas--Gong--Rossi (CRYPTO 2020) and Postlethwaite--Virdia (PKC 2021), differ from experimental data as they implicitly assume the unique shortest vector is resampled several times during the attack, changing its length. Furthermore, these estimators consider only the first two moments of the LWE secret and error, and therefore do not differentiate between distinct centred distributions with equal variances. We remedy both issues by initially fixing the short vector's length, and later integrating over its distribution. We provide extensive experimental evidence that our estimators are more accurate and faithfully capture the behaviour of different LWE distributions. In the case of Module-LWE, lattice reduction utilising the module structure could lead to cheaper attacks. We build upon the analysis of module lattice reduction by Ducas--Engelberts--Perthuis (Asiacrypt 2025), providing a simulator for Module-BKZ generalising the BKZ simulator of Chen--Nguyen (Asiacrypt 2011). We design estimators for a module variant of the primal attack, supporting our analysis with experimental evidence. Asymptotically, we show the module primal attack over a degree $d$ number field $K$ has a reduced cost, resulting in a subexponential gain, whenever the discriminant $\Delta_K$ satisfies $\vert \Delta_K \vert < d^d$, one such case being non-power-two cyclotomics.
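As background for the attack being modelled, a minimal sketch (toy parameters, far below cryptographic size) of the standard primal embedding that turns an LWE instance into a uSVP instance with a planted short vector:

```python
import random

def lwe_instance(n, m, q, seed=0):
    """Toy LWE instance with a small ternary secret and bounded error."""
    rng = random.Random(seed)
    s = [rng.randint(-1, 1) for _ in range(n)]
    A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [rng.randint(-2, 2) for _ in range(m)]
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q
         for i in range(m)]
    return A, b, s, e

n, m, q = 8, 16, 3329
A, b, s, e = lwe_instance(n, m, q)

# The uSVP target t = (s, e, 1) satisfies the q-ary parity check
# A*s + e - b = 0 (mod q), so t lies in the embedded lattice and is far
# shorter than a typical vector there; lattice reduction hunts for it.
residues = [(sum(A[i][j] * s[j] for j in range(n)) + e[i] - b[i]) % q
            for i in range(m)]
assert residues == [0] * m
```

Estimating how strong a reduction (e.g., which BKZ blocksize) is needed to surface this planted vector is exactly what the paper's refined estimators predict.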
Last updated:  2025-12-19
Towards Practical Multi-Party Hash Chains using Arithmetization-Oriented Primitives - With Applications to Threshold Hash-Based Signatures
Alexandre Adomnicăi
Despite their simplicity and quantum-resistant security properties, the deployment of hash chains in distributed settings through secure multi-party computation (MPC) has been demonstrated to be impractical when employing traditional hash functions (i.e., SHA2/SHA3) due to their high number of non-linear gates which lead to heavy computational costs. In this work, we present a comprehensive evaluation of hash chain computations over MPC using arithmetization-oriented (AO) primitives, specifically focusing on the Poseidon2 family of hash functions. We systematically analyze the MPC-friendliness of various Poseidon2 instantiations across different prime fields and parameter choices to minimize both multiplicative depth and preprocessing requirements. We conduct extensive benchmarks using the MP-SPDZ framework across three state-of-the-art MPC protocols under varying network conditions and adversarial models. We further explore practical applications to threshold cryptography, presenting optimized implementations of threshold hash-based signatures that achieve signing times less than 1 second in a 3-party setting for practical parameter sets. Specifically, we demonstrate how structural parallelism in hash-based signatures can be exploited to batch independent hash chains within a single MPC execution, and introduce a time-memory trade-off that enables non-interactive online signature generation through systematic precomputation of all chain intermediates. Our work suggests the practical viability of moderate length AO-based hash chains for MPC applications.
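The hash-chain structure itself is simple; the paper's difficulty lies in evaluating it under MPC. A plain (non-MPC) sketch with SHA-256 standing in for an AO primitive like Poseidon2:

```python
import hashlib

def hash_chain(seed: bytes, length: int) -> list:
    """Iterate H: chain[i] = H(chain[i-1]). Used e.g. in hash-based signatures."""
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

chain = hash_chain(b"seed", 16)

# Publishing chain[16] commits to the whole chain: a revealed intermediate
# chain[k] is verified by hashing it forward 16 - k times.
k = 5
v = chain[k]
for _ in range(16 - k):
    v = hashlib.sha256(v).digest()
assert v == chain[16]
```

Under MPC each hash call costs communication proportional to its non-linear gates, which is why the paper replaces SHA2/SHA3 with AO designs whose multiplicative complexity is far lower.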
Last updated:  2025-12-19
Fourier Sparsity of Delta Functions and Matching Vector PIRs
Fatemeh Ghasemi and Swastik Kopparty
In this paper we study a basic and natural question about Fourier analysis of Boolean functions, which has applications to the study of Matching Vector based Private Information Retrieval (PIR) schemes. For integers $m,r$, define a delta function on $\{0,1\}^r \subseteq \mathbb{Z}_m^r$ to be a function $f: \mathbb{Z}_m^r \to \mathbb{C}$ such that $f(0) = 1$ and $f(x) = 0$ for all nonzero Boolean $x$. The basic question that we study is how small can the Fourier sparsity of a delta function be; namely, how sparse can such an $f$ be in the Fourier basis? In addition to being intrinsically interesting and natural, such questions arise naturally while studying "$S$-decoding polynomials" for the known matching vector families. Finding $S$-decoding polynomials of reduced sparsity -- which corresponds to finding delta functions with low Fourier sparsity -- would improve the current best PIR schemes. We show nontrivial upper and lower bounds on the Fourier sparsity of delta functions. Our proofs are elementary and clean. These results imply limitations on improvements to the Matching Vector PIR schemes simply by finding better $S$-decoding polynomials. In particular, there are no $S$-decoding polynomials which can make Matching Vector PIRs based on the known matching vector families achieve polylogarithmic communication for constantly many servers. Many interesting questions remain open.
Last updated:  2025-12-19
Achieving CPAD security for BFV: a pragmatic approach
Jean-Paul Bultel, Marina Checri, Caroline Fontaine, Marc Renard, Renaud Sirdey, and Oana Stan
Fully Homomorphic Encryption (FHE) aims at ensuring privacy of sensitive data while taking advantage of external computations and services. However, using FHE in real-world scenarios reveals new kinds of security issues. In particular, following the seminal Eurocrypt'21 paper of Li and Micciancio, CPAD security has emerged as a fundamental notion for FHE, unveiling a subtle interplay between security and correctness. For correct (F)HE schemes, CPA security already implies CPAD. However, all known practical FHE schemes are (R)LWE-based and, as such, are prone to decryption errors; and even if it is possible to ensure statistical correctness by selecting appropriate parameters, achieving this while maintaining malleability --- the mainspring of FHE --- still remains challenging. Moreover, practical CPAD attacks have recently been designed against most known FHE schemes. We propose in this paper a complete, simple and rigorous framework to reach CPAD security for one of them, BFV. Our approach relies on a combination of alternate average-case/worst-case noise variance monitoring --- based on dependencies tracking during the homomorphic calculations --- and on smudging. It comes with an automated parameters setting methodology, which connects it to the recently proposed Application-Aware HE paradigm while relieving libraries end-users from the burden of enforcing the paradigm's constraints by hand.
Last updated:  2025-12-19
MIOPE: A Modular framework for Input and Output Privacy in Ensemble inference
Kyrian Maat, Gareth T. Davies, Zoltán Ádám Mann, Joppe W. Bos, and Francesco Regazzoni
We introduce a simple yet novel framework for privacy-preserving machine learning inference that allows a client to query multiple models without a trusted third party aggregator by leveraging homomorphically encrypted model evaluation and multi-party computation. This setting allows for dispersed training of models such that a client can query each separately, and aggregate the results of this `ensemble inference'; this avoids the data leakage inherent to techniques that train collectively such as federated learning. Our framework, which we call MIOPE, allows the data providers to keep the training phase local to provide tighter control over these models, and additionally provides the benefit of easily retraining to improve inference of the ensemble. MIOPE uses homomorphic encryption to keep the querying client's data private and multi-party computation to hide the individual model outputs. We illustrate the design and trade-offs of input- and output-hiding ensemble inference as provided by MIOPE and compare performance to a centralized approach. We evaluate our approach with a standard dataset and various regression models and observe that the MIOPE framework can lead to accuracy scores that are only marginally lower than centralized learning. The modular design of our approach allows the system to adapt to new data, better models, or security requirements of the involved parties.
Last updated:  2025-12-19
On Reed–Solomon Proximity Gaps Conjectures
Elizabeth Crites and Alistair Stewart
We disprove a range of conjectures for Reed-Solomon codes underpinning the security and efficiency of many modern proof systems, including SNARKs based on FRI (Ben-Sasson-Bentov-Horesh-Riabzev, ICALP’18), DEEP-FRI (Ben-Sasson-Goldberg-Kopparty-Saraf, ITCS’20), STIR (Arnon-Chiesa-Fenzi-Yogev, CRYPTO’24), and WHIR (Arnon-Chiesa-Fenzi-Yogev, preprint). Concretely, we prove that the following conjectures are false: 1. The correlated agreement up-to-capacity conjecture of Ben-Sasson-Carmon-Ishai-Kopparty-Saraf (J. ACM’23), 2. The mutual correlated agreement up-to-capacity conjecture of WHIR, 3. The list-decodability up-to-capacity conjecture of DEEP-FRI, which follows from existing results in the literature. We then propose minimal modifications to these conjectures up to the list-decoding capacity bound. Our second main contribution is a proof that correlated agreement with small enough error probability implies list decoding of Reed-Solomon codes. Thus, any future positive results on our correlated agreement conjectures with small enough error probability would imply similar results in classical list decoding. A reduction from proximity gaps to list-decodability was heretofore a natural open problem.
Last updated:  2025-12-19
Improving the Efficiency of zkSNARKs for Ballot Validity
Felix Röhr, Nicolas Huber, and Ralf Küsters
Homomorphic tallying in secure e-voting protocols enables privacy-preserving vote aggregation. For this approach, zero-knowledge proofs (ZKPs) for ensuring the validity of encrypted ballots are an essential component. While it has been common to construct tailored ZKPs for every kind of ballot and voting method at hand, recently Huber et al. demonstrated that also general-purpose ZKPs (GPZKPs), such as Groth16 zkSNARKs, are suited for checking ballot validity. Unlike tailored solutions, GPZKPs provide a unified, generic, and flexible framework for this task. In this work, we improve on the initial GPZKPs for ballot validity proposed by Huber et al. Specifically, we present several circuit-level optimizations that significantly reduce proving costs for exponential ElGamal-encrypted ballots. We provide an independent, ready-to-use Circom implementation along with concrete benchmarks, demonstrating substantial improvements in performance and practical usability over prior implementations.
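As background, the additive homomorphism of exponential ElGamal that makes such tallying work can be sketched with toy, insecure parameters (all values illustrative, not the paper's):

```python
import random

# Toy group for illustration only: nowhere near secure parameters.
p = 1019           # prime modulus
g = 2              # generator of a large subgroup of Z_p^*
x = 77             # secret key
h = pow(g, x, p)   # public key

def enc(m):
    """Exponential ElGamal: Enc(m) = (g^r, g^m * h^r). Message sits in the exponent."""
    r = random.randrange(1, p - 1)
    return (pow(g, r, p), pow(g, m, p) * pow(h, r, p) % p)

def add(c1, c2):
    """Homomorphic addition of plaintexts: multiply ciphertexts component-wise."""
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def dec(c, max_m=100):
    """Decrypt, then recover m by brute-force discrete log over the small tally range."""
    gm = c[1] * pow(pow(c[0], x, p), p - 2, p) % p
    for m in range(max_m + 1):
        if pow(g, m, p) == gm:
            return m

# Tallying: the product of encrypted 0/1 ballots decrypts to the vote count.
ballots = [enc(1), enc(0), enc(1), enc(1)]
total = ballots[0]
for c in ballots[1:]:
    total = add(total, c)
assert dec(total) == 3
```

The zkSNARKs the paper optimizes prove, without decrypting, that each such ballot ciphertext indeed encrypts a valid vote (e.g., a 0/1 value).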
Last updated:  2025-12-19
Turning Multiple Key-Dependent Attacks into Universal Attacks
Hosein Hadipour, Yosuke Todo, Mostafizar Rahman, Maria Eichlseder, Ravi Anand, and Takanori Isobe
Key-dependent attacks are effective only for specific weak-key classes, limiting their practical impact. We present a generic statistical framework that combines multiple key-dependent distinguishers into universal attacks covering the full key space. Using log-likelihood ratio statistics, our framework tests the secret key against multiple weak-key distinguishers, aggregates their evidence to determine whether the key is weak or strong for each distinguisher, and exploits this classification to reduce the effective key entropy for key recovery. We apply this to Orthros-PRF, a sum-of-permutations (SoP) design where any differential-based distinguisher holds only for a fraction of keys. This yields the first universal 8-round differential-linear (DL) key-recovery attack with median time complexity $2^{119.58}$, whereas prior work reached at most 7 rounds in the weak-key setting. To discover the required distinguishers, we extend the open-source S-box Analyzer tool with MILP support for deterministic propagation and develop a model integrating distinguisher search with key recovery. This enables automated discovery of multidimensional DL distinguishers covering up to 10 rounds in each Orthros branch, improving prior work by 4 rounds. Our results demonstrate that statistical aggregation of multiple weak-key distinguishers enables effective universal cryptanalysis. Our framework is generic and is applicable to other primitives with multiple identifiable weak-key classes.
Last updated:  2025-12-19
Laminate: Succinct SIMD-Friendly Verifiable FHE
Kabir Peshawaria, Zeyu Liu, Ben Fisch, and Eran Tromer
In outsourcing computation to untrusted servers, one can cryptographically ensure privacy using Fully Homomorphic Encryption (FHE) or ensure integrity using Verifiable Computation (VC) such as SNARK proofs. While each is practical for some applications in isolation, efficiently composing FHE and VC into Verifiable Computing on Encrypted Data (VCoED) remains an open problem. We introduce Laminate, the first practical method for adding integrity to BGV-style FHE, thereby achieving VCoED. Our approach combines the blind interactive proof framework with a tailored variant of the GKR proof system that avoids committing to intermediate computation states. We further introduce variants employing transcript packing and folding techniques. The resulting encrypted proofs are concretely succinct: 270kB, compared to 1TB in prior work, to evaluate a batch of $B=2^{14}$ instances of size $n=2^{20}$ and depth $d=32$. Asymptotically, the proof size and verifier work is $O(d \log (Bn))$, compared to $\Omega(BN\log n)$ in prior work (for ring dimension $N$). Unlike prior schemes, Laminate utilizes the full SIMD capabilities of FHE for both the payload circuit evaluation and proof generation; adds only constant multiplicative depth on top of payload evaluation while performing $\tilde{O}(n)$ FHE operations; eliminates the need for witness reduction; and is field-agnostic. The resulting cost of adding integrity to FHE, compared to assuming honest evaluation, is ${\sim}12\times$ to ${\sim}36\times$ overhead (for deep multiplication-heavy circuits of size $2^{20}$), which is $>500\times$ faster than the state-of-the-art.
Last updated:  2025-12-19
Meta-PBS: Compact High-Precision Programmable Bootstrapping
Shihe Ma, Tairong Huang, Anyu Wang, Changtong Xu, Tao Wei, and Xiaoyun Wang
Currently, most FHE schemes realize bootstrapping through the linear-decrypt-then-round paradigm. For the programmable bootstrapping (PBS) of TFHE, this means the lookup table (LUT) needs a redundancy of $O(\sqrt{N})$ to be able to remove the modulus switching noise, which limits the plaintext modulus of PBS to $O(\sqrt{N})$. We remove this requirement for redundancy by proposing the Meta-PBS framework, which allows us to start with under-redundant or non-redundant LUTs. Meta-PBS iteratively blind-rotates the LUT, during which the LUT redundancy gradually increases. The bootstrapping outputs the correct result when the redundancy eventually exceeds the noise bound. Asymptotically, Meta-PBS requires $O(1)$ blind rotations in dimension $N$ to evaluate a negacyclic function modulo $2N$, whereas PBS needs $O(\sqrt{N})$ blind rotations. Meta-PBS also enjoys an additive noise growth, allowing for more homomorphic arithmetic on bootstrapped ciphertext. We further modify Meta-PBS to support the simultaneous evaluation of multiple LUTs on the same ciphertext and/or arbitrary LUTs. According to our implementation, when evaluating a 12-bit negacyclic function, Meta-PBS is 79 times faster than EBS (PKC'23). When evaluating an arbitrary function on an 8-bit LWE ciphertext, Meta-PBS reduces the running time of the Refined LHE (CCS'25) by half while allowing for a 27 times larger post-bootstrap linear combination.
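The negacyclic behaviour referred to here is a standard TFHE fact: blind rotation multiplies the LUT polynomial by a power of $X$ in $\mathbb{Z}[X]/(X^N+1)$, where $X^N = -1$. A toy sketch of that ring arithmetic (illustrating the rotation only, not Meta-PBS itself):

```python
def negacyclic_rotate(lut, k):
    """Multiply the polynomial with coefficient list `lut` by X^k mod X^N + 1.

    Since X^N = -1 in this ring, rotating shifts coefficients and negates
    those that wrap around; rotations have period 2N, with rotation by N
    equal to global negation (hence 'negacyclic functions modulo 2N').
    """
    N = len(lut)
    k %= 2 * N
    out = lut[:]
    for _ in range(k):
        out = [-out[-1]] + out[:-1]
    return out

lut = list(range(8))                                   # N = 8 toy LUT
assert negacyclic_rotate(lut, 8) == [-c for c in lut]  # X^N = -1
assert negacyclic_rotate(lut, 16) == lut               # X^{2N} = 1
assert negacyclic_rotate(lut, 3)[:3] == [-5, -6, -7]
```

In PBS, the encrypted phase selects the rotation amount, and the constant coefficient of the rotated polynomial becomes the LUT output.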
Last updated:  2025-12-19
PULSE: Parallel Private Set Union for Large-Scale Entities
Jiahui Gao, Son Nguyen, Marina Blanton, and Ni Trieu
Multi-party private set union (mPSU) allows multiple parties to compute the union of their private input sets without revealing any additional information. Existing efficient mPSU protocols can be categorized into symmetric key encryption (SKE)-based and public key encryption (PKE)-based approaches. However, neither type of mPSU protocol scales efficiently to a large number of parties, as they fail to fully utilize available computational resources, leaving participants idle during various stages of the protocol execution. This work examines the limitation of existing protocols and proposes a unified framework for designing efficient mPSU protocols. We then introduce an efficient Parallel mPSU for Large-Scale Entities (PULSE) that enables parallel computation, allowing all parties/entities to perform computations without idle time, leading to significant efficiency improvements, particularly as the number of parties increases. Our protocol is based on PKE and secure even when up to $n-1$ semi-honest parties are corrupted. We implemented PULSE and compared it to state-of-the-art mPSU protocols under different settings, showing a speedup of $1.91$ to $3.57\times$ for $n=8$ parties for various set sizes.
Last updated:  2025-12-19
Cryptanalysis of Pseudorandom Error-Correcting Codes
Tianrui Wang, Anyu Wang, Tianshuo Cong, Delong Ran, Jinyuan Liu, and Xiaoyun Wang
Pseudorandom error-correcting codes (PRC) are a novel cryptographic primitive proposed at CRYPTO 2024. Due to the dual capability of pseudorandomness and error correction, PRC has been recognized as a promising foundational component for watermarking AI-generated content. However, the security of PRC has not been thoroughly analyzed, especially with concrete parameters or even in the face of cryptographic attacks. To fill this gap, we present the first cryptanalysis of PRC. We first propose three attacks to challenge the undetectability and robustness assumptions of PRC. Among them, two attacks aim to distinguish PRC-based codewords from plain vectors, and one attack aims to compromise the decoding process of PRC. Our attacks successfully undermine the claimed security guarantees across all parameter configurations. Notably, our attack can detect the presence of a watermark with overwhelming probability at a cost of $2^{22}$ operations. We also validate our approach by attacking real-world large generative models such as DeepSeek and Stable Diffusion. To mitigate our attacks, we further propose three defenses to enhance the security of PRC, including parameter suggestions, implementation suggestions, and constructing a revised key generation algorithm. Our proposed revised key generation function effectively prevents the occurrence of weak keys. However, we highlight that the current PRC-based watermarking scheme still cannot achieve a 128-bit security under our parameter suggestions due to the inherent configurations of large generative models, such as the maximum output length of large language models.
Last updated:  2025-12-18
When Simple Permutations Mix Poorly: Limited Independence Does Not Imply Pseudorandomness
Jesko Dujmovic, Angelos Pelecanos, and Stefano Tessaro
Over the past two decades, several works have used (almost) $k$-wise independence as a proxy for pseudorandomness in block ciphers, since it guarantees resistance against broad classes of statistical attacks. For example, even the case $k = 2$ already implies security against differential and linear cryptanalysis. Hoory, Magen, Myers, and Rackoff (ICALP ’04; TCS ’05) formulated an appealing conjecture: if the sequential composition of $T$ independent local randomized permutations is (close to) four-wise independent, then it should also be a pseudorandom permutation. Here, "local" means that each output bit depends on only a constant number of input bits. This conjecture offers a potential strong justification for analyses of block ciphers that establish (almost) $k$-wise independence of this type of constructions. In this work, we disprove the conjecture in full generality by presenting an explicit local randomized permutation whose sequential composition is four-wise independent, but not a pseudorandom permutation. Our counterexample in fact extends to $k$-wise independence for any constant $k$.
Last updated:  2025-12-18
$k$-out-of-$n$ Proofs and Application to Privacy-Preserving Cryptocurrencies
Min Zhang, Yu Chen, and Xiyuan Fu
Cryptocurrencies enable transactions among mutually distrustful users. While UTXO-based cryptocurrencies offer mature solutions achieving strong privacy and supporting multi-receiver transfers, account-based cryptocurrencies currently lack practical solutions that simultaneously guarantee these properties. To close this gap, we propose a generic framework for account-based cryptocurrencies that attains strong privacy and supports multi-receiver transfers, and then give a practical instantiation called \textit{Anonymous PGC}. Our system also excels in efficiency: for a 64-sized anonymity set and 8 receivers, Anonymous PGC achieves 2.4$\times$ faster transaction generation, 5.7$\times$ faster verification, and 2.2$\times$ reduction in transaction size compared to state-of-the-art Anonymous Zether (IEEE S\&P 2021), which offers only weak privacy and no multi-receiver support. At the core of Anonymous PGC are two novel zero-knowledge proofs of partial knowledge. First, we generalize the Groth-Kohlweiss (GK) $1$-out-of-$n$ proof (EUROCRYPT 2015) to the $k$-out-of-$n$ case, resolving an open problem regarding its generalization. Particularly, the obtained proof composes seamlessly with range proofs, yielding an efficient $k$-out-of-$n$ range proof that demonstrates that $k$ witnesses among $n$ instances lie in specific ranges. Second, we extend the Attema-Cramer-Fehr (ACF) $k$-out-of-$n$ proof (CRYPTO 2021) to support distinct group homomorphisms, boosting its expressiveness while slashing both prover and verifier complexities from quadratic to linear. We believe these proofs are of independent interest in broader privacy-preserving applications.
Last updated:  2025-12-18
SNARK Lower Bounds via Communication Complexity
Rishabh Bhadauria, Alexander R. Block, Prantar Ghosh, and Justin Thaler
We initiate the study of lower bounding the verification time of Succinct Non-interactive ARguments of Knowledge (SNARKs) built in the Polynomial Interactive Oracle Proof + Polynomial Commitment Scheme paradigm. The verification time of these SNARKs is generally dominated by the polynomial commitment scheme, and so we want to understand if polynomial commitment schemes admit lower bounds on the verification time. By recognizing that polynomial commitment schemes are also often built by applying cryptography to some information-theoretic core protocol, we seek to separate this core from the cryptography in a way that meaningfully captures the verification time required by the polynomial commitment scheme verifier. We provide strong evidence that several polynomial commitment schemes have (nearly) optimal verifier times. Our evidence comes from connecting polynomial commitment schemes to certain information-theoretic protocols known as communication protocols from the field of communication complexity, a link which we believe to be of independent interest. Through this lens, we model the verifier work in the cryptographic protocols as information (i.e., number of bits) exchanged between parties in the communication protocols, allowing us to leverage lower bounds from communication complexity. These lower bounds give strong evidence that the verifier time in these polynomial commitment schemes must be at least the number of bits exchanged in the communication protocol. We extract the communication protocol cores of three polynomial commitment schemes and lower bound the bits exchanged in these cores. The lower bounds we obtain match (up to poly-logarithmic factors) the best-known (asymptotic) verification times of the polynomial commitment schemes we examine in this work. 
Specifically, we show that for univariate/multilinear polynomials of size $N=2^n$:
- the communication core of Hyrax PCS (Wahby et al., S&P 2016) requires $\Omega(\sqrt{N})$ bits to be exchanged;
- the communication core of Bulletproofs PCS (Bootle et al., EUROCRYPT 2016; Bünz et al., S&P 2018) requires $\Omega(N)$ bits to be exchanged; and
- the communication core of Dory PCS (Lee, TCC 2021) requires $\Omega(\log(N))$ bits to be exchanged.
Our results strongly suggest a negative answer to a longstanding open question on whether the Bulletproofs verifier can be made sublinear time.
Last updated:  2025-12-18
UFOs: An Ultra-fast Toolkit for Multiparty Computation of Small Elements
Jiacheng Gao, Moyang Xie, Yuan Zhang, and Sheng Zhong
In most secure multiparty computation (MPC) scenarios, the data to be processed are much smaller than the underlying field size. The field is typically chosen to be large enough to guarantee security, e.g., a 128-bit prime field for 128-bit security, while the data can be as small as several bits, e.g., $4$ bits for a $16$-category classification task. This size gap can result in significant waste of communication and computation in existing MPC protocols, which often treat data of different ranges indiscriminately. We introduce UFO$_\mathrm{s}$, an ultra-fast toolkit for MPC on small elements. UFO$_\mathrm{s}$ provides highly optimized protocols for three fundamental tasks: one-hot encoding, comparison and digit decomposition. While these protocols are designed specifically for small elements, as a demonstration of their power, we construct a radix sort protocol that sorts large field elements. Our experiments show significant performance improvements over state-of-the-art MPC implementations. In particular, our sorting protocol achieves up to a $58\times$ speedup in the online phase when sorting $2^{16}$ elements among $5$ parties.
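The radix-sort application can be pictured in the clear (no MPC): decompose each large element into small digits, encode digits as one-hot vectors, and bucket stably digit by digit. A toy sketch with hypothetical parameters:

```python
def digit_decompose(x, base, ndigits):
    """Split x into ndigits base-`base` digits, least significant first."""
    digits = []
    for _ in range(ndigits):
        digits.append(x % base)
        x //= base
    return digits

def one_hot(digit, base):
    """Unit vector with a 1 at position `digit`: the small-element encoding."""
    return [1 if i == digit else 0 for i in range(base)]

def radix_sort(values, base=16, ndigits=8):
    """Sort large elements by stably bucketing on each small digit in turn."""
    for d in range(ndigits):
        buckets = [[] for _ in range(base)]
        for v in values:
            buckets[digit_decompose(v, base, ndigits)[d]].append(v)
        values = [v for bucket in buckets for v in bucket]
    return values

vals = [0xDEAD, 0xBEEF, 0x1234, 0xCAFE, 0x0042]
assert radix_sort(vals) == sorted(vals)
assert one_hot(3, 8) == [0, 0, 0, 1, 0, 0, 0, 0]
assert digit_decompose(0x1234, 16, 4) == [4, 3, 2, 1]
```

In the MPC setting, each of these steps runs on secret-shared data, and the point of the toolkit is that the per-digit subprotocols only pay for the digit's few bits rather than the full field element.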
Last updated:  2025-12-18
Return of the Kummer: a Toolbox for Genus-2 Cryptography
Maria Corte-Real Santos and Krijn Reijnders
This work expands the machinery we have for isogeny-based cryptography in genus 2 by developing a toolbox of several essential algorithms for Kummer surfaces, the dimension-2 analogue of $x$-only arithmetic on elliptic curves. Kummer surfaces have been suggested for hyperelliptic curve cryptography since at least the 1980s, and recently these surfaces have reappeared to efficiently compute $(2,2)$-isogenies. We construct several essential analogues of techniques used in one-dimensional isogeny-based cryptography, such as pairings, deterministic point sampling, and point compression, and give an overview of $(2,2)$-isogenies on Kummer surfaces. We furthermore show how Scholten's construction can be used to transform isogeny-based cryptography over elliptic curves over $\mathbb{F}_{p^2}$ into protocols over Kummer surfaces over $\mathbb{F}_{p}$. As an example of this approach, we demonstrate that SQIsign verification can be performed completely on Kummer surfaces, and, therefore, that one-dimensional SQIsign verification can be viewed as a two-dimensional isogeny between products of elliptic curves, now defined over $\mathbb{F}_p$ rather than $\mathbb{F}_{p^2}$.
Last updated:  2025-12-18
Dimension-Reducing Algorithms for Quaternion Ideal-SVP
Cong Ling, Andrew Mendelsohn, and Christian Porter
We study the approximate Hermite Shortest Vector Problem (HSVP) in ideal lattices in orders of quaternion algebras. For one- and two-sided ideals respectively, we show that for almost all ideals we may solve HSVP in a sublattice of dimension at most one half (respectively, one quarter) of the original lattice dimension, with only small losses in the approximation factor. For two-sided ideals in a cryptographically-relevant family of maximal orders, we obtain approximation factors independent of the algebraic norm of the ideal. For one-sided ideals, we obtain a similar result for a large and natural family of ideal lattices. Finally, we turn our mathematical results into algorithms, and give an unconditional quantum polynomial time algorithm to solve HSVP in ideals of maximal orders of quaternion algebras, given an oracle for HSVP in ideals of maximal orders of number fields, in lower dimension.
Last updated:  2025-12-18
SLIDE: Shuffle Shamir Secret Shares Uniformly with Linear Online Communication and Guaranteed Output Delivery
Jiacheng Gao, Moyang Xie, Yuan Zhang, and Sheng Zhong
We revisit shuffle protocols for Shamir secret sharing. Existing constructions either produce a non-uniform shuffle or incur high communication and round complexity, in some cases exponential in the number of parties. We propose two new shuffle protocols that achieve a uniform shuffle with communication complexity $O\!\left(\frac{k + l}{\log k} \, n^2 m \log m\right)$ for shuffling rows of an $m\times l$ matrix shared among $n$ parties, where $k\leq m$ is a tunable parameter that balances communication and computation. The first protocol is concretely more efficient, while the second achieves the best-known $O(nml)$ online communication and $O(n)$ rounds. Both constructions enjoy the ideal property of guaranteed output delivery (GOD). In terms of both overall and online communication, our protocols improve upon the state of the art for shuffle protocols based on Shamir secret sharing. Our key technical ingredient is a novel permutation sharing technique that represents a permutation by smaller permutation matrices. Once permutations are shared, applying them becomes significantly cheaper, enabling a highly efficient online phase. The first protocol applies independent secret permutations from all parties in sequence, while the second builds on the shuffle correlation technique of Gao et al., achieving $O(nml)$ online complexity. We further extend shuffle correlation to support GOD with linear online communication, an extension that has not been previously explored. As a result, we obtain SLIDE, the first shuffle protocol achieving both $O(nml)$ online communication and guaranteed output delivery. Our constructions rely only on the most basic primitives of Shamir secret sharing over any field $\mathbb{F}$ with $\lvert\mathbb{F}\rvert > n$.
Since shuffle is a fundamental building block for higher-level MPC primitives such as sorting and oblivious data structures, our results are broadly applicable and can accelerate many real-world applications of secure multiparty computation.
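The sequential structure of the first protocol can be sketched in the clear (a plaintext toy, not the secret-shared protocol):

```python
# Plaintext sketch of the sequential-shuffle structure: each party i
# contributes an independent permutation pi_i, and the shuffle applies
# the composition pi_n o ... o pi_1 to the rows of an m x l matrix.
# If any single party's permutation is uniformly random, so is the
# composition, which is why honest parties can hide the shuffle.
import random

def apply_perm(perm, rows):
    """Apply a permutation: output row j is input row perm[j]."""
    return [rows[perm[j]] for j in range(len(rows))]

m, l, n_parties = 6, 3, 5
matrix = [[i * l + j for j in range(l)] for i in range(m)]

shuffled = matrix
for _ in range(n_parties):
    shuffled = apply_perm(random.sample(range(m), m), shuffled)

# The multiset of rows is preserved.
assert sorted(shuffled) == sorted(matrix)
```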
Last updated:  2025-12-18
Security Models and Cryptographic Protocols in a Quantum World
Céline Chevalier, Paul Hermouet, and Quoc-Huy Vu
The emergence of quantum computing has provided new paradigms for cryptography. On the one hand, it poses significant new threats to existing classical cryptographic systems, requiring the community to define new security models that capture what a quantum adversary can do. On the other hand, it gives us new tools to design cryptographic protocols, with weaker assumptions than in the classical world, or even protocols that are impossible classically. In this survey, we first give an overview of new security definitions for classical cryptography, considering quantum adversaries who can either only use local quantum computation (post-quantum security), or even send quantum messages and in particular access oracles in superposition (quantum security). We explore these new notions through the examples of commitments, zero-knowledge proofs, encryption, and signatures. Then, we present what is arguably the most famous application of quantum cryptography: quantum key distribution (QKD), whose protocols take advantage of unique properties of quantum mechanics to provide unconditionally secure communication. We also explore cryptography beyond QKD, focusing on unclonable cryptography: a family of cryptographic functionalities, built with quantum states, designed to resist counterfeiting by leveraging the no-cloning theorem. We examine in particular quantum money, but also the recent notions of unclonable encryption and copy-protection, including related variants. By presenting a comprehensive survey of these topics, this paper aims to provide a thorough understanding of the current landscape and future potential of quantum cryptography.
Last updated:  2025-12-18
BOIL: Proof-Carrying Data from Accumulation of Correlated Holographic IOPs
Tohru Kohrita, Maksim Nikolaev, and Javier Silva
In this paper, we present a batching technique for oracles corresponding to codewords of a Reed–Solomon code. This protocol is inspired by the round function of the STIR protocol (CRYPTO 2024). Using this oracle batching protocol, we propose a construction of a practically efficient accumulation scheme, which we call BOIL. Our accumulation scheme can be initiated with an arbitrary correlated holographic IOP, leading to a new class of PCD constructions. The results of this paper were originally given as a presentation at zkSummit12.
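The standard ingredient behind batching Reed–Solomon oracles is that a random linear combination of codewords is itself a codeword, so one oracle (and one proximity test) can stand in for many. A plaintext check of this identity over the small prime field $\mathbb{F}_{97}$ (BOIL's actual batching follows the STIR round function and is more involved):

```python
# A Reed-Solomon codeword is the evaluation table of a low-degree
# polynomial over a fixed domain. Evaluation is linear in the
# coefficients, so combining codewords with a random challenge r gives
# the codeword of the combined polynomial.
p = 97
domain = list(range(1, 9))                      # evaluation domain

def encode(coeffs):
    """RS codeword: evaluations of the polynomial over the domain."""
    return [sum(c * x**i for i, c in enumerate(coeffs)) % p for x in domain]

f = [3, 1, 4]                                   # two degree-<3 polynomials
g = [2, 7, 1]
r = 5                                           # verifier's random challenge

lhs = [(a + r * b) % p for a, b in zip(encode(f), encode(g))]
rhs = encode([(a + r * b) % p for a, b in zip(f, g)])
assert lhs == rhs                               # the combination is a codeword
```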
Last updated:  2025-12-18
On the representation of self-orthogonal codes and applications to cryptography
Marco Baldi, Rahmi El Mechri, Paolo Santini, and Riccardo Schiavoni
The \textit{hull} of a linear code is the intersection between the code and its dual. When the hull is equal to the code (i.e., the code is contained in its dual), the code is called \textit{self-orthogonal} (or \textit{weakly self-dual}); if, moreover, the code is equal to its dual, we speak of a \textit{self-dual} code. For problems such as the Permutation Equivalence Problem (PEP) and (special instances of) the Lattice Isomorphism Problem (LIP) over $q$-ary lattices, codes with a sufficiently large hull provide hard-to-solve instances. In this paper we describe a technique to compress the representation of a self-orthogonal code. Namely, we propose an efficient compression (and decompression) technique that allows representing the generator matrix of a self-orthogonal code with slightly more than $k(n-k)-\binom{k+1}{2}$ finite field elements. The rationale is to exploit the relations arising from self-orthogonality to reconstruct part of the generator matrix entries from the others, thus reducing the number of entries one needs to uniquely represent the code. For instance, for self-dual codes, this almost halves the number of finite field elements required to represent the code. We first present a basic version of our algorithm and show that it runs in polynomial time and that its communication cost asymptotically approaches the lower bound set by Shannon's source coding theorem. Then, we provide an improved version which reduces both the size of the representation and the time complexity, essentially making the representation technique as costly as Gaussian elimination.
As concrete applications, we show that our technique can be used to reduce the public key size in cryptosystems based on PEP such as LESS and SPECK (achieving approximately a 50% reduction in the public key size), as well as in the Updatable Public Key Encryption Scheme recently proposed by Albrecht, Benčina and Lai, which is based on LIP.
Last updated:  2025-12-18
Secure Distributed State Management for Stateful Signatures with a Practical and Universally Composable Protocol
Johannes Blömer, Henrik Bröcher, Volker Krummel, and Laurens Porzenheim
Stateful signatures like the NIST-standardized signature schemes LMS and XMSS provide an efficient and mature realization of post-quantum secure signature schemes. They are recommended for long-term use cases such as firmware signing. However, stateful signature schemes require proper management of a so-called state. In stateful signature schemes like LMS and XMSS, signing keys consist of a set of keys of a one-time signature scheme, and it has to be guaranteed that each one-time key is used only once. This is done by updating a state in each signature computation, essentially recording which one-time keys have already been used. While this is straightforward in centralized systems, in distributed systems like secure enclaves consisting of, e.g., multiple hardware security modules (HSMs) with limited communication, keeping a distributed state that is consistent among all parties at any point in time presents a challenge. This challenge is not addressed by the current standardization processes. In this paper we present a security model for the distributed key management of post-quantum secure stateful signatures like XMSS and LMS. We also present a simple, efficient, and easy-to-implement protocol proven secure in this model, i.e., the protocol guarantees at any point in time a consistent state among the parties in a distributed system, such as a distributed security enclave. The security model is defined in the universal composability (UC) framework of Ran Canetti by providing an ideal functionality for the distributed key management of stateful signatures. Hence our protocol remains secure even when arbitrarily composed with other instances of the same or other protocols, a necessity for the security of distributed key management protocols. Our main application is security enclaves consisting of HSMs, but the model and the protocol can easily be adapted to other scenarios of distributed key management for stateful signature schemes.
Last updated:  2025-12-18
Anamorphic Monero Transactions: the Threat of Bypassing Anti-Money Laundering Laws
Adrian Cinal, Przemysław Kubiak, Mirosław Kutyłowski, and Gabriel Wechta
In this paper, we analyze the clash between privacy-oriented cryptocurrencies and emerging legal frameworks for combating financial crime, focusing in particular on the recent European Union regulations. We analyze Monero, a leading "privacy coin" and a major point of concern for law enforcement, and study the scope of due diligence that must be exercised under the new law with regard to Monero trading platforms and how it translates to the technical capabilities of the Monero protocol. We both recognize flaws in the legislation and identify technical pitfalls threatening either the effective compliance of, say, Monero exchanges or the anonymity endeavour of Monero itself. Of independent interest is that we turn to anamorphic cryptography (marking one of the first practical applications of the concept) and leverage it to build a hidden transaction layer embedded in the Monero blockchain that obfuscates illegal money flow and circumvents transaction-level attempts at enforcing the EU law.
Last updated:  2025-12-18
Quantum Resource Analysis of Low-Round Keccak/SHA-3 Preimage Attack: From Classical $2^{57.8}$ to Quantum $2^{28.9}$ using Qiskit Modeling
Ramin Rezvani Gilkolaei and Reza Ebrahimi
This paper presents a hardware-conscious analysis of the quantum acceleration of the classical 3-round Keccak-256 preimage attack using Grover's Algorithm. While the theoretical quantum speed-up from $T_{cl} \approx 2^{57.8}$ (classical) to $T_{qu} \approx 2^{28.9}$ (quantum) is mathematically sound, the practical implementation overhead is so extreme that attacks remain wholly infeasible in both resource and runtime dimensions. Using Qiskit-based circuit synthesis, we derive that a 3-round Keccak quantum oracle requires:
- 9,600 Toffoli gates (with uncomputation for reversibility);
- 3,200 logical qubits (1,600 state + 1,600 auxiliary);
- $7.47 \times 10^{13}$ total 2-qubit gates (full Grover search);
- 3.2 million physical qubits (with quantum error correction), a prohibitive requirement;
- 0.12 years (43 days) to 2,365+ years execution time, depending on machine assumptions.
These barriers (particularly the physical qubit requirements, circuit depth, and error accumulation) render the quantum attack infeasible for any foreseeable quantum computer. Consequently, SHA-3 security is not threatened by quantum computers for preimage attacks. We emphasize the critical importance of hardware-aware complexity analysis in quantum cryptanalysis: the elegant asymptotic theory of Grover's Algorithm hides an engineering overhead so prohibitive that the quantum approach becomes infeasible from both resource and implementation perspectives.
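The headline speed-up is the usual Grover square root, which halves the exponent; a one-line check:

```python
# Grover's algorithm gives a quadratic speed-up: T_qu ~ sqrt(T_cl),
# so the exponent halves: 57.8 / 2 = 28.9. The paper's point is that
# the per-iteration oracle cost, not this iteration count, dominates.
import math

T_cl = 2 ** 57.8
T_qu = math.sqrt(T_cl)
assert math.isclose(T_qu, 2 ** 28.9)
```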
Last updated:  2025-12-18
E2E-AKMA: An End-to-End Secure and Privacy-Enhancing AKMA Protocol Against the Anchor Function Compromise
Yueming Li, Long Chen, Qianwen Gao, and Zhenfeng Zhang
The Authentication and Key Management for Applications (AKMA) system represents a recently developed protocol established by 3GPP, which is anticipated to become a pivotal component of the 5G standards. AKMA enables application service providers to delegate user authentication processes to mobile network operators, thereby eliminating the need for these providers to store and manage authentication-related data themselves. This delegation enhances the efficiency of authentication procedures but simultaneously introduces certain security and privacy challenges that warrant thorough analysis and mitigation. The 5G AKMA service is facilitated by the AKMA Anchor Function (AAnF), which may operate outside the boundaries of the 5G core network. A compromise of the AAnF could potentially allow malicious actors to exploit vulnerabilities, enabling them to monitor user login activities or gain unauthorized access to sensitive communication content. Furthermore, the exposure of the Subscription Permanent Identifier (SUPI) to external Application Functions poses substantial privacy risks, as the SUPI could be utilized to correlate a user's real-world identity with their online activities, thereby undermining user privacy. To mitigate these vulnerabilities, we propose a novel protocol named E2E-AKMA, which facilitates the establishment of a session key between the User Equipment (UE) and the Application Function (AF) with end-to-end security, even in scenarios where the AAnF has been compromised. Furthermore, the protocol ensures that no entity, aside from the 5G core network, can link account activities to the user's actual identity. This architecture preserves the advantages of the existing AKMA scheme, such as eliminating the need for complex dynamic secret data management and avoiding reliance on specialized hardware (apart from standard SIM cards). 
Experimental evaluations reveal that the E2E-AKMA framework incurs an overhead of approximately 9.4\% in comparison to the original 5G AKMA scheme, indicating its efficiency and practicality for deployment.
Last updated:  2025-12-18
Random-Access AEAD for Fast Lightweight Online Encryption
Andrés Fábrega, Julia Len, Thomas Ristenpart, and Gregory Rubin
We study the problem of random-access authenticated encryption. In this setting, one wishes to encrypt (resp., decrypt) a large payload in an online manner, i.e., using a limited amount of memory, while allowing plaintext (resp., ciphertext) segments to be processed in a random order. Prior work has studied online AE for in-order (streaming) encryption and decryption, and later work added additional constraints to support random-access decryption. The result is complicated notions that are not built from the start to account for random access. We thus provide a new, clean-slate treatment of the random-access setting. We introduce random-access authenticated encryption (raAE) schemes, which capture AEAD that provides random-access encryption and decryption. We introduce formal security definitions for raAE schemes that cover confidentiality, integrity, and commitment. We prove relationships with existing notions, showing that our simpler treatment does not sacrifice achievable security. Our implications also yield the first treatment of commitment security for online AEAD, an increasingly important security goal for AEAD. We then exercise our formalization with a practice-motivated case study: FIPS-compliant raAE. We introduce an raAE scheme called FLOE (Fast Lightweight Online Encryption) that is FIPS compliant, compatible with existing AES-GCM APIs that mandate random nonces, and yet can provide secure, random-access, committing encryption of orders of magnitude more data than naive approaches that utilize AES-GCM. FLOE was designed in close collaboration with leading cloud data platform Snowflake, where it will soon be used in production to protect sensitive data.
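As a generic illustration of the random-access idea (not FLOE's actual construction), one common pattern is to split the payload into fixed-size segments and derive an independent per-segment nonce, so any segment can be encrypted or decrypted in isolation; the HMAC-based derivation below is purely illustrative:

```python
# NOT FLOE-specific: derive a distinct per-segment nonce from a root key
# and a base nonce, so each segment can be processed independently and
# in any order. Each (key, nonce_i) pair would then feed an AEAD such as
# AES-GCM, with the segment index bound into the associated data.
import hashlib
import hmac

def segment_nonce(root_key, base_nonce, index):
    """Illustrative derivation of a 12-byte nonce for segment `index`."""
    msg = base_nonce + index.to_bytes(8, "big")
    return hmac.new(root_key, msg, hashlib.sha256).digest()[:12]

root = b"\x00" * 32
base = b"\x01" * 12
n0 = segment_nonce(root, base, 0)
n5 = segment_nonce(root, base, 5)
assert n0 != n5 and len(n0) == 12   # distinct nonces, random access
```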
Last updated:  2025-12-18
Post-Quantum Security of the Sum of Even-Mansour
YanJin Tan, JunTao Gao, and XueLian Li
The Sum of Even-Mansour (SoEM) construction was proposed by Chen et al. at Crypto 2019. This construction implements a pseudorandom permutation via the modular addition of two independent Even-Mansour structures and can spawn multiple variants by altering the number of permutations or keys. It has become the design basis for some symmetric schemes, such as the nonce-based encryption scheme CENCPP* and the nonce-based message authentication code scheme nEHTm. This paper provides a proof of the quantum security of the SoEM21 construction under the Q1 model: when an attacker has quantum access to the random permutations but only classical access to the keyed construction, the SoEM21 construction ensures security of up to \(n/3\) bits. This exactly matches the complexity \(O(2^{n/3})\) of the quantum key recovery attack in the Q1 model recently proposed by Li et al., thus establishing a tight bound.
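A plaintext toy of the sum-of-Even-Mansour shape over 8-bit strings (the keying shown is illustrative only; see the paper and Chen et al. for the precise SoEM21 specification):

```python
# Toy sketch (not the exact SoEM21 specification): each branch is an
# Even-Mansour structure EM_i(x) = P_i(x XOR k_i) XOR k_i, and the
# construction outputs the XOR (the "sum") of the two branches. "21"
# denotes two independent public permutations and a single key, i.e.,
# k_1 = k_2 = k below; note the output whitenings then cancel, so the
# key enters only via input whitening.
import random

n = 8
rng = random.Random(1)
P1 = rng.sample(range(2 ** n), 2 ** n)   # public random permutation
P2 = rng.sample(range(2 ** n), 2 ** n)
k = rng.randrange(2 ** n)                # single secret key

def soem21(x):
    em1 = P1[x ^ k] ^ k
    em2 = P2[x ^ k] ^ k
    return em1 ^ em2   # sum of the two Even-Mansour branches

out = [soem21(x) for x in range(2 ** n)]
# Unlike a single Even-Mansour cipher, the sum is meant to behave as a
# pseudorandom *function*, not a permutation: outputs may collide.
assert all(0 <= y < 2 ** n for y in out)
```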
Last updated:  2025-12-18
Benchmarking SLH-DSA: A Comparative Hardware Analysis Against Classical Digital Signatures for Post-Quantum Security
Jayalaxmi H, H M Brunda, Sumith Subraya Nayak, Sathya M, and Anirudh S Hegde
The advent of large-scale quantum computers poses a fundamental threat to widely deployed public-key cryptographic schemes such as RSA and elliptic curve digital signatures. In response, the National Institute of Standards and Technology has standardized several post-quantum cryptographic algorithms, including the Stateless Hash-Based Digital Signature Algorithm (SLH-DSA) specified in FIPS 205. While SLH-DSA offers strong, conservative security guarantees based solely on cryptographic hash functions, its practical adoption depends on a clear understanding of its hardware cost and performance characteristics relative to classical standards. This paper presents a unified hardware benchmarking study of SLH-DSA against RSA, DSA, ECDSA, and EdDSA. All algorithms are implemented at the register-transfer level in Verilog HDL and synthesized on the same Xilinx Artix-7 FPGA platform to ensure a fair comparison. The evaluation focuses on key hardware metrics, including logic utilization, memory usage, DSP consumption, operational latency, maximum clock frequency, and throughput for key generation, signing, and verification. The results demonstrate that SLH-DSA is logic- and memory-intensive, with significantly higher signing latency and larger signature sizes compared to classical schemes. However, its verification performance is highly competitive, and its public key size remains extremely small. In contrast, classical schemes are primarily arithmetic-bound and rely heavily on DSP resources. The findings highlight that SLH-DSA represents a viable post-quantum solution for applications prioritizing long-term security assurance and efficient verification, such as firmware authentication and digital archiving, despite its higher signing cost.
Last updated:  2025-12-18
High Exponents May Not Suffice to Patch AIM (On Attacks, Weak Parameters, and Patches for AIM2)
Yimeng Sun, Shiyao Chen, Guowei Liu, Meiqin Wang, and Chao Niu
The growth of advanced cryptographic applications has driven the development of arithmetization-oriented (AO) ciphers over large finite fields, which are designed to minimize multiplicative complexity. However, this design advantage of AO ciphers can also serve as an attack vector. For instance, the \textsf{AIM} one-way function in the post-quantum signature AIMer proposed at CCS 2023 was broken by several works soon after its publication. The designers then promptly developed secure patches and proposed an enhanced version, \textsf{AIM2}, which was incorporated into the latest version of AIMer, selected as one of the winners of the Korean PQC Competition in early 2025. In this paper, we focus on the algebraic security of \textsf{AIM2} over $\mathbb{F}_{2^n}$. First, we introduce a resultant-minimized model that reduces eliminations by using a non-$k$-based substitution strategy and linearized-polynomial decomposition, achieving an attack time complexity of $2^{188.76}$ ($2^{195.05}$) primitive calls of \textsf{AIM2-III} when $\omega=2$ ($\omega=2.373$), indicating that the designers have been over-optimistic in the evaluation of their security margin. Second, we propose a subfield reduction technique for the case where exponents approach subfield extension sizes and equation degrees collapse sharply, \textit{e.g.}, the exponent $e_2=141\mapsto 13$ in \textsf{AIM2-V} when considering the subfield $\mathbb{F}_{2^{128}}$. This can lower the algebraic attack complexity to $2^{295.97}$ primitive calls at $\omega=2$, which improves upon the designers' estimated bound for the Gr\"{o}bner basis attack by about $2^{100}$. Besides, based on our attack methods, we have identified some weak parameter choices, which can provide concrete design guidance for the \textsf{AIM2} construction, especially for the exponent of its Mersenne S-box.
Finally, to address the potential vulnerabilities, we further propose \textsf{AIM2-patch}, a simple secure patch on \textsf{AIM2}, which prevents key elimination, neutralizes linearized-polynomial decomposition, and raises algebraic attack complexity, while incurring negligible overheads in the AIMer scheme.
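The degree collapse over a subfield can be reproduced in a small-field analogue ($\mathbb{F}_{2^4}$ inside $\mathbb{F}_{2^8}$ with the AES polynomial; the AIM2 parameters themselves are not used here):

```python
# For x in the subfield F_{2^m} of F_{2^n}, Frobenius wraps around:
# x^(2^m) = x, hence x^(2^e) = x^(2^(e mod m)). In AIM2-V this collapses
# the exponent e_2 = 141 to 13 over F_{2^128} (141 mod 128 = 13). Below,
# the analogue in F_{2^8}: for x in F_{2^4}, x^(2^6) = x^(2^2).
def gf_mul(a, b):                      # multiplication in F_{2^8}, poly 0x11B
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def gf_pow(a, e):                      # square-and-multiply exponentiation
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

subfield = [x for x in range(256) if gf_pow(x, 2 ** 4) == x]
assert len(subfield) == 16             # F_{2^4} has exactly 16 elements
for x in subfield:
    assert gf_pow(x, 2 ** 6) == gf_pow(x, 2 ** (6 % 4))
assert 141 % 128 == 13                 # the AIM2-V exponent collapse
```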
Last updated:  2025-12-18
An Efficient Private GPT Never Autoregressively Decodes
Zhengyi Li, Yue Guan, Kang Yang, Yu Feng, Ning Liu, Yu Yu, Jingwen Leng, and Minyi Guo
The wide deployment of the generative pre-trained transformer (GPT) has raised privacy concerns for both clients and servers. While cryptographic primitives can be employed for secure GPT inference to protect the privacy of both parties, they introduce considerable performance overhead. To accelerate secure inference, this study proposes a public decoding and secure verification approach that utilizes public GPT models, motivated by the observation that securely decoding one token and securely decoding multiple tokens take similar latency. The client uses the public model to generate a set of tokens, which are then securely verified by the private model for acceptance. The efficiency of our approach depends on the acceptance ratio of tokens proposed by the public model, which we improve from two aspects: (1) a private sampling protocol optimized for cryptographic primitives and (2) model alignment using knowledge distillation. Our approach improves the efficiency of secure decoding while maintaining the same level of privacy and generation quality as standard secure decoding. Experiments demonstrate a $2.1\times \sim 6.0\times$ speedup compared to standard decoding across three pairs of public-private models and different network conditions.
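The propose-and-verify loop resembles speculative decoding; a plaintext toy with stand-in next-token functions (all names hypothetical; in the paper, verification runs under cryptographic protection on the private model):

```python
# Toy of the propose-and-verify loop: the public model drafts a window
# of tokens cheaply; the private model accepts the longest prefix it
# agrees with, then emits one corrected token. Speed-up grows with the
# acceptance ratio, which is why model alignment matters.

def public_next(ctx):        # stand-in draft model
    return (ctx[-1] + 1) % 10

def private_next(ctx):       # stand-in target model; disagrees on 7
    n = (ctx[-1] + 1) % 10
    return 0 if n == 7 else n

def generate(ctx, n_tokens, window=4):
    out = list(ctx)
    while len(out) < len(ctx) + n_tokens:
        draft = []
        for _ in range(window):                # cheap public decoding
            draft.append(public_next(out + draft))
        accepted = []
        for t in draft:                        # one "secure" verification pass
            if private_next(out + accepted) == t:
                accepted.append(t)
            else:
                accepted.append(private_next(out + accepted))
                break
        out += accepted
    return out[len(ctx):len(ctx) + n_tokens]

print(generate([0], 8))  # -> [1, 2, 3, 4, 5, 6, 0, 1]: windows accepted until the disagreement at 7
```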
Last updated:  2025-12-18
Nimbus: Secure and Efficient Two-Party Inference for Transformers
Zhengyi Li, Kang Yang, Jin Tan, Wen-jie Lu, Haoqi Wu, Xiao Wang, Yu Yu, Derun Zhao, Yancheng Zheng, Minyi Guo, and Jingwen Leng
Transformer models have gained significant attention due to their power in machine learning tasks. Their extensive deployment has raised concerns about the potential leakage of sensitive information during inference. However, when applied to Transformers, existing approaches based on secure two-party computation (2PC) face efficiency limitations on two fronts: (1) resource-intensive matrix multiplications in linear layers, and (2) complex non-linear activation functions like $\mathsf{GELU}$ and $\mathsf{Softmax}$. This work presents a new two-party inference framework $\mathsf{Nimbus}$ for Transformer models. For the linear layer, we propose a new 2PC paradigm along with an encoding approach to securely compute matrix multiplications based on an outer-product insight, which achieves $2.9\times \sim 12.5\times$ performance improvements compared to the state-of-the-art (SOTA) protocol. For the non-linear layer, through a new observation of utilizing the input distribution, we propose an approach of low-degree polynomial approximation for $\mathsf{GELU}$ and $\mathsf{Softmax}$, which improves the performance of the SOTA polynomial approximation by $2.9\times \sim 4.0\times$, where the average accuracy loss of our approach is 0.08\% compared to the non-2PC inference without privacy. Compared with the SOTA two-party inference, $\mathsf{Nimbus}$ improves the end-to-end performance of BERT inference by $2.7\times \sim 4.7\times$ across different network settings.
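The outer-product identity underlying such linear-layer reshaping can be checked in plaintext (this shows only the linear-algebra identity, not the 2PC encoding):

```python
# The "outer-product insight": C = A @ B can be accumulated as a sum of
# outer products of A's columns with B's rows, C = sum_i A[:,i] (x) B[i,:].
# Reorganizing the multiplication this way changes which operand each
# partial product touches, which protocols can exploit.

def matmul_outer(A, B):
    n, k, m = len(A), len(B), len(B[0])
    C = [[0] * m for _ in range(n)]
    for i in range(k):                 # one outer product per inner index
        col = [A[r][i] for r in range(n)]
        row = B[i]
        for r in range(n):
            for c in range(m):
                C[r][c] += col[r] * row[c]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert matmul_outer(A, B) == [[19, 22], [43, 50]]   # equals A @ B
```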
Last updated:  2025-12-18
ARION: Attention-Optimized Transformer Inference on Encrypted Data
Linhan Yang, Jingwei Chen, Wangchen Dai, Shuai Wang, Wenyuan Wu, and Yong Feng
Privacy-preserving Transformer inference (PPTI) is essential for deploying large language models (LLMs) such as BERT and LLaMA in sensitive domains. In these models, the attention mechanism is both the main source of expressiveness and the dominant performance bottleneck under fully homomorphic encryption (FHE), due to large ciphertext matrix multiplications and the softmax nonlinearity. This paper presents Arion, a non-interactive FHE-based PPTI protocol that specifically optimizes the computation of encrypted attention. First, for the three consecutive ciphertext matrix multiplications in multi-head attention, we introduce the double Baby-Step Giant-Step algorithm, which significantly reduces the number of ciphertext rotations. On BERT-Base, Arion achieves an 82.5% reduction in rotations over the state-of-the-art PPTI protocol MOAI (2025), corresponding to a 5.7x reduction in rotation cost. Second, we propose a linear–nonlinear fusion technique tailored to the softmax evaluation in attention. By decomposing softmax into shift-by-maximum, exponentiation, and reciprocal sub-steps and fusing them with the surrounding encrypted matrix operations, Arion enables efficient attention evaluation while remaining compatible with diverse ciphertext packing formats. We implement Arion using Lattigo and first evaluate attention kernels on popular LLMs including BERT-Tiny, BERT-Base, and LLaMA, confirming the practicality and scalability of the proposed optimizations for encrypted attention computation. For end-to-end applications, on classification tasks for several benchmark datasets, Arion attains accuracy comparable to plaintext inference and yields up to 2.5x end-to-end speedups over MOAI for BERT-Base.
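The baby-step giant-step idea can be illustrated with plain list rotations (a plaintext analogue; in Arion these are ciphertext rotations, and the double-BSGS variant for the chained matrix multiplications is more involved):

```python
# Baby-step giant-step for rotations: any amount r in {0..N-1} splits as
# r = g*B + b, so B precomputed baby rotations plus N/B giant rotations
# cover all N amounts via one composition each. With N = 16 and B = 4,
# 8 precomputed rotations replace 16.

def rot(v, r):
    """Left-rotate a list by r positions (stand-in for a ciphertext rotation)."""
    return v[r:] + v[:r]

N, B = 16, 4
v = list(range(N))

for r in range(N):
    g, b = r // B, r % B
    assert rot(rot(v, g * B), b) == rot(v, r)   # giant step, then baby step
```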
Last updated:  2025-12-18
HHGS: Forward-secure Dynamic Group Signatures from Symmetric Primitives
Xuelian Cao, Zheng Yang, Daniel Reijsbergen, Jianting Ning, Junming Ke, Zhiqiang Ma, and Jianying Zhou
Group signatures allow a group member to sign messages on behalf of the group while preserving the signer's anonymity, making them invaluable for privacy-sensitive applications. As quantum computing advances, post-quantum security in group signatures becomes essential. Symmetric primitives (SP) offer a promising pathway due to their simplicity, efficiency, and well-understood security foundations. In this paper, we introduce the first \textit{forward-secure dynamic group signature} (FSDGS) framework relying solely on SP. We begin with \textit{hierarchical hypertree group signatures} (HHGS), a basic scheme that securely organizes keys of one-time signatures (OTS) in a hypertree using puncturable pseudorandom functions, enabling on-demand key generation, forward security, and dynamic enrollment, and providing resilience against attacks that exploit registration patterns by obfuscating the assignment and usage of keys. We then extend this foundation to HHGS$^+$, which orchestrates multiple HHGS instances in a generic way, significantly extending the total signing capacity to $O(2^{60})$; this outperforms HHGS's closest competitors while keeping signatures below 8 kilobytes. We prove the security of both schemes in the standard model. Our results outline a practical SP-driven pathway toward post-quantum-secure group signatures suitable for resource-constrained client devices.
Last updated:  2025-12-17
Accelerating FrodoKEM in Hardware
Sanjay Deshpande, Patrick Longa, and Jakub Szefer
FrodoKEM, a conservative post-quantum key encapsulation mechanism based on the plain Learning with Errors (LWE) problem, has been recommended for use by several government cybersecurity agencies and is currently undergoing standardization by the International Organization for Standardization (ISO). Despite its robust security guarantees, FrodoKEM's performance remains one of the main challenges to its widespread adoption. This work addresses this concern by presenting a fully standard-compliant, high-performance hardware implementation of FrodoKEM targeting both FPGA and ASIC platforms. The design introduces a scalable parallelization architecture that supports run-time configurability across all twelve parameter sets, covering three security levels (L1, L3, L5), two PRNG variants (SHAKE-based and AES-based), and both standard and ephemeral modes, alongside synthesis-time tunability through a configurable performance parameter to balance throughput and resource utilization. For security level L1 on Xilinx Ultrascale+ FPGA, the implementation achieves 3,164, 2,846, and 2,614 operations per second for key generation, encapsulation, and decapsulation, respectively, representing the fastest standard-compliant performance reported to date while consuming only 27.8K LUTs, 64 DSPs, and 8.1K flip-flops. These results significantly outperform all prior specification-compliant implementations and even surpass non-compliant designs that sacrifice specification adherence for speed. Furthermore, we present the first ASIC evaluation of FrodoKEM using the NANGATE45 45 nm technology library, achieving 7,194, 6,471, and 5,943 operations per second for key generation, encapsulation, and decapsulation, respectively, with logic area of 0.235 mm$^2$. The ASIC implementation exhibits favorable sub-linear area scaling and competitive energy efficiency across different performance parameter configurations, establishing a baseline for future comparative studies. 
The results validate FrodoKEM's practical viability for deployment in high-throughput, resource-constrained, and power-sensitive cryptographic applications, demonstrating that conservative post-quantum security can be achieved without compromising performance.
Last updated:  2025-12-17
On the Pitfalls of Modeling Individual Knowledge
Wojciech Ciszewski, Stefan Dziembowski, Tomasz Lizurej, and Marcin Mielniczuk
The concept of knowledge has been central in cryptography, especially within cryptographic proof systems. Traditionally, research in this area considers an abstract \emph{prover} defending a claim that it knows a message $M$. Recently, a stronger concept—termed ``individual'' (Dziembowski et al., CRYPTO'23) or ``complete'' (Kelkar et al., CCS'24) knowledge—has emerged. This notion ensures the prover physically stores $M$ on a machine that it controls. As we argue in the paper, this concept also appears in earlier work on ``non-outsourceable puzzles'' (Miller et al., CCS'15), which implicitly assumes that quickly performing a complex computation on a string $M$ implies storing it on a single machine. In this line of work, the authors typically rely on algorithms whose computation requires a massive number of queries to a hash function $H$. This paper highlights a subtle issue in the modeling used in some of these papers: more concretely, the assumption that $H$ can be modeled as an atomic random oracle on long messages. Unfortunately, this does not correspond well to how hash functions are constructed in practice. For example, real-world hash functions (e.g., Merkle–Damgård or sponge-based constructions) allow partial evaluation on long inputs, violating this assumption. Another example is the hashing used in Bitcoin mining, which permits similar precomputation. This undermines some protocols relying on individual knowledge. We demonstrate practical attacks against Miller et al.'s and Kelkar et al.'s schemes based on this observation, and discuss secure alternatives. Our alternative constructions, which are modifications of the original ones, avoid reliance on the random oracle behavior of hash functions on long messages. In the full version of this paper, we will provide their formal security analysis in the individual cryptography model of Dziembowski et al. (CRYPTO'23).
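The partial-evaluation property at the heart of this observation is easy to see with an iterated (Merkle–Damgård) hash such as SHA-256. In this minimal Python sketch, the copyable `hashlib` state stands in for the compression-function midstate that could be handed to a second machine, so that neither machine ever holds all of $M$:

```python
import hashlib

# A long message M, split into a large prefix and a short suffix.
prefix = b"A" * 1_000_000
suffix = b"the final block"

# Machine 1 absorbs only the prefix and exports its intermediate state.
state = hashlib.sha256()
state.update(prefix)

# Machine 2 resumes from that state and finishes the hash
# without ever storing the full message M.
finisher = state.copy()
finisher.update(suffix)

# The result equals the "atomic" hash of the whole message.
assert finisher.digest() == hashlib.sha256(prefix + suffix).digest()
```

This is exactly the behavior an atomic random-oracle model of $H$ on long messages rules out, which is why protocols proven in that model can fail against real hash functions.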
Last updated:  2025-12-17
How to Compare Bandwidth Constrained Two-Party Secure Messaging Protocols: A Quest for A More Efficient and Secure Post-Quantum Protocol
Benedikt Auerbach, Yevgeniy Dodis, Daniel Jost, Shuichi Katsumata, and Rolfe Schmidt
Transitioning existing classical two-party secure messaging protocols to post-quantum protocols has been an active effort in practice in recent years, as witnessed by Apple’s PQ3 protocol and the recent Triple Ratchet protocol being investigated by the Signal team together with academics (Dodis et al., Eurocrypt’25). However, due to the large communication overhead of post-quantum primitives, numerous design choices non-existent in the classical setting are being explored, rendering comparison of secure messaging protocols difficult, if not impossible. In this work, we thus propose a new pragmatic metric to measure how secure a messaging protocol is given a particular communication pattern, enabling a concrete methodology to compare secure messaging protocols. We uncover that there can be no “optimal” protocol, as different protocols are often incomparable with respect to worst-case (adversarial) messaging behaviors, especially when faced with real-world bandwidth constraints. We develop a comprehensive framework to experimentally compare various messaging protocols under given bandwidth limits and messaging behaviors. Finally, we apply our framework to compare several new and old messaging protocols. Independently, we also uncover untapped optimizations which we call opportunistic sending, leading to better post-quantum messaging protocols. To capture these optimizations, we further propose sparse continuous key agreement as a fundamental building block for secure messaging protocols, which could be of independent interest.
Last updated:  2025-12-17
Breaking UOV Encryption: Key Recovery Attack on OliVier
Emanuele Cornaggia
The Oil and Vinegar (OV) trapdoor is widely used in signature schemes such as UOV and MAYO. Recently, Esposito et al. proposed OliVier, an encryption scheme based on this trapdoor. However, the OV trapdoor was originally designed for signatures, and adapting it to encryption introduces inherent challenges. We identify two such challenges and analyze how OliVier addresses the first, while showing that the unresolved second challenge enables a practical key-recovery attack. We conclude that any scheme using the OV trapdoor for encryption must also solve this second problem, for which no efficient solution is currently known.
Last updated:  2025-12-17
Masked Circuit Compiler in the Cardinal Random Probing Composability Framework
Sonia Belaïd, Victor Normand, and Matthieu Rivain
Designing practically secure masked circuits remains a central problem in the field of cryptographic implementation. While most masking schemes have been proven secure in the classical probing model, this model fails to capture more advanced side-channel attacks such as horizontal attacks. In recent years, the community has shifted toward the more realistic random probing model, which offers stronger guarantees. Yet, balancing strong security with practical efficiency continues to be a significant challenge. In this work, we introduce new tools and constructions that significantly improve the design and analysis of random probing secure circuits. First, we formalize new security notions that combine the benefits of cardinal and general Random Probing Composability (RPC), two recently introduced notions enabling more flexible and efficient composition of secure gadgets. We then show how uniformly random permutations can be applied to transform any cardinal or general RPC gadget into a so-called uniformly cardinal RPC gadget, thereby enhancing security at low cost. Using these techniques, we propose the first non-linear multiplication gadget, inspired by the recursive construction from CHES 2016, that achieves concrete cardinal RPC security. We provide a detailed comparison with state-of-the-art multiplication gadgets in terms of both random probing advantage and implementation complexity. Building upon this gadget, we design a tighter random probing compiler that strategically uses permutations to improve security bounds while preserving efficiency. Finally, we apply our compiler to the AES and demonstrate improved performance and security compared to existing methods.
Last updated:  2025-12-17
PRGUE Schemes: Efficient Updatable Encryption With Robust Security From Symmetric Primitives
Elena Andreeva and Andreas Weninger
Securing sensitive data for long-term storage in the cloud is a challenging problem. Updatable encryption (UE) enables changing the encryption key of encrypted data in the cloud while the plaintext and all versions of the key remain secret from the cloud storage provider, making it an efficient alternative for companies that seek to outsource their data storage. The most secure UE schemes to date follow robust security models, such as the one by Boyd et al. from CRYPTO 2020, and rely exclusively on asymmetric cryptography, thus incurring a substantial performance cost. In contrast, the Nested UE construction of Boneh et al. from ASIACRYPT 2020 achieves much better efficiency with symmetric cryptography, but it provides weaker security guarantees. Boyd et al. further suggest that attaining robust UE security inherently requires the use of asymmetric cryptography. In this work, we show for the first time that symmetric UE schemes are not inherently limited in their security and can achieve guarantees on par with, and even beyond, Boyd’s UE model. To this end, we extend Boyd’s framework to encompass the class of ciphertext-dependent UE schemes and introduce indistinguishability-from-random (IND\$) as a stronger refinement of indistinguishability. While our IND\$ notion primarily streamlines the proofs of advanced security properties within the model, it yields practical privacy advantages: ciphertexts do not exhibit a recognizable structure that could otherwise distinguish them from arbitrary data. We then introduce two robustly secure symmetric UE constructions, tailored to different target security levels. Our schemes are built on a novel design paradigm that combines symmetric authenticated encryption with ciphertext re-randomization, leveraging for the first time the use of pseudorandom number generators in a one-time-pad style. This approach enables both robust security and high efficiency, including in AES-based implementations. 
Our first scheme, PUE-List, delivers encryption up to 600× faster than prior asymmetric schemes of similar robustness, while matching Boneh et al.’s efficiency and achieving the stronger security level of Boyd et al. Our second scheme, PUE-One, further boosts performance with constant-time decryption 24× faster than all previously known UE schemes, overcoming the main bottleneck in Boneh’s design, while trading off some security, yet still significantly surpassing the guarantees of Boneh’s Nested scheme.
Last updated:  2025-12-17
Language-Agnostic Detection of Computation-Constraint Inconsistencies in ZKP Programs via Value Inference
Arman Kolozyan, Bram Vandenbogaerde, Janwillem Swalens, Lode Hoste, Stefanos Chaliasos, and Coen De Roover
Zero-knowledge proofs (ZKPs) allow a prover to convince a verifier of a statement's truth without revealing any other information. In recent years, ZKPs have matured into a practical technology underpinning major applications. However, implementing ZKP programs remains challenging, as they operate over arithmetic circuits that encode the logic of both the prover and the verifier. Therefore, developers must not only express the computations for generating proofs, but also explicitly specify the constraints for verification. As recent studies have shown, this decoupling may lead to critical ZKP-specific vulnerabilities. Unfortunately, existing tools for detecting them are limited, as they: (1) are tightly coupled to specific ZKP languages, (2) are confined to the constraint level, preventing reasoning about the underlying computations, (3) target only a narrow class of bugs, and (4) suffer from scalability bottlenecks due to reliance on SMT solvers. To address these limitations, we propose a language-agnostic formal model, called the Domain Consistency Model (DCM), which captures the relationship between computations and constraints. Using this model, we provide a taxonomy of vulnerabilities based on computation-constraint mismatches, including novel subclasses overlooked by existing models. Next, we implement a lightweight automated bug detection tool, called CCC-Check, which is based on abstract interpretation. We evaluate CCC-Check on a dataset of 20 benchmark programs. Compared to the SoTA verification tool CIVER, our tool achieves a 100-1000$\times$ speedup, while maintaining a low false positive rate. Finally, using the DCM, we examine six widely adopted ZKP projects and uncover 15 previously unknown vulnerabilities. We reported these bugs to the projects' maintainers, 13 of which have since been patched. Of these 15 vulnerabilities, 12 could not be captured by existing models.
Last updated:  2025-12-17
Leakage-Resilient Multi-Party Computation: Protecting the Evaluator in Circuits Garbling
Francesco Berti and Itamar Levi
Garbling schemes allow two parties to compute a joint function on private inputs without revealing them. Yet, a semi-honest garbler might exploit hardware/software side-channel leakage from the evaluator, an alarming threat with no concrete solution to date. Using the homomorphic properties of ElGamal encryption, we can prevent such leakage-based attacks.
Last updated:  2025-12-17
Completing Policy-based Anonymous Tokens: Private Bits, Public Metadata and more...
David Kretzler, Yong Li, and Codrin Ogreanu
Anonymous tokens are cryptographic protocols for restricting the access to online resources to eligible users. After proving eligibility to the token issuer, the client receives a set of tokens. Later, it can prove eligibility to a resource provider by sending one of the tokens received from the issuer. The anonymous token protocol ensures that the resource provider cannot link received tokens to their issuance, even if it colludes with the token issuer. Recently, Faut et al. (EuroS\&P’25) introduced the concept of policy-based anonymous tokens, in which an issuer provides a single pre-token to a client, who can locally derive multiple tokens according to a publicly announced policy. The major advantage of policy-based tokens is that the communication complexity of the issuance phase is constant. While the work of Faut et al. constitutes a promising step in a new direction, their protocol still lacks several desirable properties known from standard anonymous tokens -- most notably, the ability to bind a pre-token and all tokens derived from it to a private metadata bit or a publicly known metadata string. In this work, we present a new framework for policy-based anonymous token schemes in the random oracle model. Our framework includes two concretely practical constructions -- one based on equivalence class signatures and one on algebraic MACs -- as well as a communication-optimized, though less practical, construction based on zkSNARKs. All three constructions can be configured to support private metadata bits, public metadata, or both. We formalize the notion of policy-based anonymous tokens with a private metadata bit and public metadata, and we prove security of the two primary constructions: the equivalence-class-signature-based scheme and the algebraic-MAC-based scheme. Finally, we provide an experimental evaluation and comparison of all our constructions alongside the most relevant related work. 
Our results demonstrate that our two primary constructions achieve significant efficiency improvements over the scheme of Faut et al., both in terms of computation and communication.
Last updated:  2025-12-17
More NTRU+Sign Signatures from Cyclotomic Trinomials
Ga Hee Hong, Joo Woo, Jonghyun Kim, Minkyu Kim, Hochang Lee, and Jong Hwan Park
Recently, $\mathsf{NTRU}$+$\mathsf{Sign}$ was proposed as a new compact signature scheme, following the `Fiat-Shamir with Aborts' (FSwA) framework. Its compactness is mainly based on its novel NTRU-based key structure that fits well with bimodal distributions in the FSwA framework. However, despite its compactness, $\mathsf{NTRU}$+$\mathsf{Sign}$ fails to provide a diverse set of parameters covering all desired security levels. This limitation stems from its reliance on a ring $\mathbb{Z}_q[x]/\langle x^n+1 \rangle$, where $n$ is restricted to powers of two, limiting the flexibility in selecting appropriate security levels. To overcome this limitation, we propose a revised version of $\mathsf{NTRU}$+$\mathsf{Sign}$ by adopting a ring $\mathbb{Z}_q[x]/\langle x^n-x^{n/2}+1\rangle$ from cyclotomic trinomials, where $n=2^{i}3^{j}$ for some positive integers $i$ and $j$. Our parameterization offers three distinct security levels: approximately $120$, $190$, and $260$ bits, while preserving the compactness achieved over $\mathbb{Z}_q[x]/\langle x^n+1 \rangle$. We implement these re-parameterized $\mathsf{NTRU}$+$\mathsf{Sign}$ schemes, showing that the performance of $\mathsf{NTRU}$+$\mathsf{Sign}$ from cyclotomic trinomials remains comparable to previous lattice-based signature schemes such as $\mathsf{Dilithium}$ and $\mathsf{HAETAE}$.
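As a toy illustration of arithmetic in such a ring (a sketch only, not the authors' implementation), multiplication in $\mathbb{Z}_q[x]/\langle x^n-x^{n/2}+1\rangle$ can reduce products via the identity $x^n \equiv x^{n/2} - 1$; the parameters $n=6$ and $q=257$ below are purely illustrative:

```python
def mul_mod(a, b, n, q):
    """Multiply polynomials a, b (coefficient lists, degree < n)
    in Z_q[x] / (x^n - x^{n/2} + 1), with n even."""
    # Schoolbook product: degree at most 2n - 2.
    c = [0] * (2 * n - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % q
    # Reduce high coefficients using x^n ≡ x^{n/2} - 1, so
    # x^k ≡ x^{k - n/2} - x^{k - n} for k ≥ n.  Processing k in
    # descending order also clears coefficients created on the way.
    for k in range(2 * n - 2, n - 1, -1):
        ck = c[k]
        if ck:
            c[k] = 0
            c[k - n // 2] = (c[k - n // 2] + ck) % q
            c[k - n] = (c[k - n] - ck) % q
    return c[:n]

n, q = 6, 257                       # toy ring: x^6 - x^3 + 1 (n = 2*3)
x5 = [0, 0, 0, 0, 0, 1]             # x^5
x1 = [0, 1, 0, 0, 0, 0]             # x
# x^5 * x = x^6 ≡ x^3 - 1
assert mul_mod(x5, x1, n, q) == [q - 1, 0, 0, 1, 0, 0]
```

The descending reduction loop is what keeps the sketch correct for any even $n$: a coefficient folded down to index $k - n/2$ may itself still exceed degree $n-1$, but it is revisited later in the same pass.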
Last updated:  2025-12-17
On the Provable Dual Attack for LWE by Modulus Switching
Hongyuan Qu and Guangwu Xu
As a theoretical cornerstone of post-quantum cryptography, the Learning With Errors (LWE) problem serves as the security foundation for standardized algorithms such as Kyber and Dilithium. Recently, a framework for provable dual attacks on LWE was proposed by Pouly et al. at Eurocrypt 2024, addressing the limitations in effectiveness caused by existing methods' reliance on heuristic assumptions in LWE dual attacks. Their paper also poses an open problem: how to formally integrate modulus switching into this framework to reduce attack costs. The main purpose of this paper is to give a solution to this open problem by presenting an improved provable dual attack method that incorporates modulus switching and Chinese Remainder Theorem (CRT) techniques. First, we design a modulus switching mechanism that eliminates practical errors via the Poisson summation formula. By embedding the inherent noise from modulus switching into a rational lattice framework, our approach effectively prevents the risk of attack failure caused by the merging of such errors with LWE noise. Theoretical guarantees (Theorems \ref{main_theorem} and \ref{main_theorem_entire}) rigorously quantify the parameter ranges for successful attacks. Second, we introduce a CRT-based secret recovery method that aggregates partial secrets from independent sub-attacks. By leveraging the Chinese Remainder Theorem to reconstruct full secrets from congruence relations, our method adapts to arbitrary secret distributions. Furthermore, by using a tighter variant of Banaszczyk's measure inequality, we obtain a precise parameter range for the dual attack's efficacy through rigorous mathematical proof, and achieve the same complementary gap with the contradictory regime (proposed by Ducas et al.) as in Pouly et al.'s work. Experiments show a $4$–$5$ bit improvement in attack estimates compared to the original framework.
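The CRT reconstruction step underlying such secret recovery can be sketched generically. The following minimal Python function (with illustrative moduli and secret, not the paper's parameters) recovers a value from its residues modulo pairwise-coprime integers:

```python
import math

def crt_reconstruct(residues, moduli):
    """Recover x mod prod(moduli) from x ≡ r_i (mod m_i),
    assuming the moduli are pairwise coprime."""
    M = math.prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi mod m (Python 3.8+).
        x = (x + r * Mi * pow(Mi, -1, m)) % M
    return x

secret = 42                         # illustrative "full secret"
moduli = [5, 7, 11]                 # illustrative sub-attack moduli
residues = [secret % m for m in moduli]
assert crt_reconstruct(residues, moduli) == secret
```

In the attack setting, each residue would come from an independent sub-attack learning the secret modulo a small prime; the reconstruction itself is the textbook CRT formula above.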
Last updated:  2025-12-16
Certified-Everlasting Quantum NIZK Proofs
Nikhil Pappu
We study non-interactive zero-knowledge proofs (NIZKs) for NP satisfying: 1) statistical soundness, 2) computational zero-knowledge and 3) certified-everlasting zero-knowledge (CE-ZK). The CE-ZK property allows a verifier of a quantum proof to revoke the proof in a way that can be checked (certified) by the prover. Conditioned on successful certification, the verifier's state can be efficiently simulated with only the statement, in a statistically indistinguishable way. Our contributions regarding these certified-everlasting NIZKs (CE-NIZKs) are as follows: - We identify a barrier to obtaining CE-NIZKs in the CRS model via generalizations of known interactive proofs that satisfy CE-ZK. - We circumvent this by constructing CE-NIZK from black-box use of NIZK for NP satisfying certain properties, along with OWFs. As a result, we obtain CE-NIZKs for NP in the CRS model, based on polynomial hardness of the learning with errors (LWE) assumption. - In addition, we observe that the aforementioned barrier does not apply to the shared EPR model. Consequently, we present a CE-NIZK for NP in this model based on any statistical binding hidden-bits generator, which can be based on LWE. The only quantum computation in this protocol involves single-qubit measurements of the shared EPR pairs.
Last updated:  2025-12-16
Towards Lightweight CKKS: On Client Cost Efficiency
Jung Hee Cheon, Minsik Kang, and Jai Hyun Park
Fully homomorphic encryption (FHE) enables clients with small devices to securely delegate their computations to powerful servers. However, to delegate these computations, a client should generate and transmit several gigabytes of FHE keys to the server. Reducing the size of FHE keys without compromising efficiency is therefore highly desirable, particularly for applications involving mobile and IoT devices. In this work, we propose two new key management systems, $\textsf{KG}^{+}$ and $\textsf{BTS}^{+}$, which reduce the client-side cost of FHE. The $\textsf{KG}^{+}$ system significantly reduces the key size without any compromise in the efficiency of homomorphic computation compared to the state-of-the-art key management system by Lee-Lee-Kim-No [Asiacrypt 2023]. The $\textsf{BTS}^{+}$ system further reduces the key size at the expense of granularity in homomorphic computation. Our systems rely on a new ring switching technique for keys that bridges FHE keys with different parameters. This technique decouples the rings of FHE keys during transmission and computation. In our approach, the client generates and sends ``transmission keys'' with size-optimal parameters, and the server generates ``evaluation keys'' with computation-optimal parameters. We evaluate the communication costs of $\textsf{KG}^{+}$ and $\textsf{BTS}^{+}$ on the tasks studied in Lee-Lee-Kim-No. The transmission key sizes for CKKS bootstrapping with ring dimension $2^{16}$ are $325$--$609$ MB for $\textsf{KG}^{+}$ and $285$ MB for $\textsf{BTS}^{+}$. These are $3.09$--$4.37\times$ and $3.51$--$9.30\times$ smaller than Lee-Lee-Kim-No, respectively. For the secure ResNet-20 inference, the key sizes for $\textsf{KG}^{+}$ and $\textsf{BTS}^{+}$ are $325$--$609$ MB and $285$ MB, respectively. These are $3.95$--$5.73\times$ and $4.53$--$12.25\times$ smaller than Lee-Lee-Kim-No.
Last updated:  2025-12-16
TSS-PV: Traceable Secret Sharing with Public Verifiability
Duc Anh Luong, Jong Hwan Park, Changmin Lee, and Hyoseung Kim
High-value custodial systems require both Public Verifiability (PVSS) to audit key distribution and Traceability (TSS) to identify insider leakage via black-box ``reconstruction boxes.'' Existing schemes achieve one property but not both, leaving practical systems exposed to either undetectable dealer misbehavior or untraceable share leakage. Combining these properties introduces the ``Provenance Paradox'': a verifiability-aware reconstruction box with access to verification predicates and public transcripts can reject dummy shares used for tracing because they have no provenance in the public transcript. We present TSS-PV, the first publicly verifiable traceable secret sharing scheme that resolves this paradox. Our key insight is to inject indistinguishable dummy shares during the sharing phase itself, ensuring they are committed to the public transcript before any reconstruction box is constructed. We formalize syntax and security under a modular adversarial model: public verifiability holds against fully malicious dealers and parties; traceability identifies leaking parties after honest distribution; and non-imputability prevents a malicious dealer from framing honest parties. Both tracing properties assume a verifiability-aware (perfect) reconstruction box. We instantiate TSS-PV over cyclic groups using Schnorr-based NIZKs and a recent generic tracing framework (CRYPTO'24). Public verification costs scale linearly in the number of parties; tracing costs are quadratic. A Curve25519 prototype on commodity hardware demonstrates practicality: for $32\text{ - }256$ parties, distribution verification completes in $14\text{ - }127$ ms, tracing in $0.24\text{ - }76$ s, and trace verification in $0.15\text{ - }25$ s.
Last updated:  2025-12-16
Tight Generic PRF Security of HMAC and NMAC
Yaobin Shen, Xiangyang Zhang, Lei Wang, and Dawu Gu
HMAC and its variant NMAC are among the most widely used methods for keying a cryptographic hash function to obtain a PRF or a MAC. Yet, even after nearly three decades of research, their generic PRF security still remains poorly understood, where the compression function of the underlying hash function is treated as a black box and accessible to the adversary. Although a series of works have exploited compression function queries to mount generic attacks, proving tight bounds on the generic PRF security of HMAC and NMAC has remained a challenging open question until now. In this paper, we establish tight bounds on the generic PRF security of HMAC and NMAC. Our bounds capture the influence of the number of construction queries, the number of compression function queries, and the maximal block length of a message on their security. The proofs are carried out in the multi-user setting and the bounds hold regardless of the number of users. In addition, we present matching attacks to demonstrate that our bounds are essentially tight. Taken together, our results close a longstanding gap in the generic PRF security analysis of HMAC and NMAC.
Last updated:  2025-12-16
HQC Beyond the Standard: Ciphertext Compression and Refined DFR Analysis
Sebastian Bitzer, Jean-Christophe Deneuville, Emma Munisamy, Bharath Purtipli, Stefan Ritterhoff, and Antonia Wachter-Zeh
Hamming Quasi-Cyclic (HQC), recently selected by NIST for standardization, does not employ ciphertext compression, unlike its lattice-based counterpart Kyber. In lattice-based encryption, ciphertext compression is a standard post-processing step, typically implemented through coefficient-wise rounding. In contrast, analogous methods have not yet been explored in code-based cryptography. We address this gap by developing techniques to reduce ciphertext sizes in schemes defined over the Hamming metric, with a particular focus on HQC. To support this approach, the decryption failure rate (DFR) analysis is generalized. Specifically, we revisit the modeling of the error that must be correctable with probability $2^{-\lambda}$ to achieve $\lambda$ bits of security, which was previously tractable only under an independence assumption. We propose a more accurate model of the error distribution, which takes dependencies between the coefficients into account. Confirmed by extensive simulations, the proposed model sharpens the DFR analysis and, hence, our understanding of the security of HQC. Building on this generalized framework, we present a ciphertext compression mechanism that enables a precise DFR analysis and is therefore transparent with respect to security. This is achieved by carefully designing a quantization code with a direct-product structure, aligned with HQC's error-correcting code. For the parameters proposed in the round 4 submission, our techniques reduce HQC ciphertext sizes by up to 4.7%; a proof-of-concept implementation confirms that this improvement comes without noticeable loss in efficiency. Reductions of up to 10% are achievable through a trade-off with public-key size.