Papers updated in last 31 days (293 results)
On the representation of self-orthogonal codes and applications to cryptography
The hull of a linear code is the intersection between the code and its dual.
When the hull is equal to the code (i.e., the code is contained in the dual), the code is called self-orthogonal (or weakly self-dual); if, moreover, the code is equal to its dual, then we speak of a self-dual code.
For problems such as the Permutation Equivalence Problem (PEP) and (special instances of) the Lattice Isomorphism Problem (LIP) over $q$-ary lattices, codes with a sufficiently large hull provide hard-to-solve instances.
In this paper we describe a technique to compress the representation of a self-orthogonal code.
Namely, we propose an efficient compression (and decompression) technique that allows representing the generator matrix of a self-orthogonal code with slightly more than $k(n-k)-\binom{k+1}{2}$ finite field elements.
The rationale consists in exploiting the relationships deriving from self-orthogonality to reconstruct part of the generator matrix entries from the others, thus reducing the number of entries one needs to uniquely represent the code.
For instance, for self-dual codes, this almost halves the number of finite field elements required to represent the code.
We first present a basic version of our algorithm and show that it runs in polynomial time and, moreover, its communication cost asymptotically approaches the lower bound set by Shannon's source coding theorem.
Then, we provide an improved version which reduces both the size of the representation and the time complexity, essentially making the representation technique as costly as Gaussian elimination.
As concrete applications, we show that our technique can be used to reduce the public key size in cryptosystems based on PEP such as LESS and SPECK (achieving approximately a 50% reduction in the public key size), as well as in the Updatable Public Key Encryption Scheme recently proposed by Albrecht, Benčina and Lai, which is based on LIP.
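To see where the count $k(n-k)-\binom{k+1}{2}$ comes from (a minimal sketch, not the authors' algorithm): writing a self-orthogonal code in systematic form $G = (I_k \mid A)$, the condition $GG^T = 0$ becomes $AA^T = -I_k$, a symmetric system of $\binom{k+1}{2}$ relations on the $k(n-k)$ entries of $A$. The self-dual extended Hamming $[8,4]$ code over $\mathbb{F}_2$ makes the counting concrete; the Python below is illustrative only.

# Minimal sketch: verify self-duality of the extended Hamming [8,4] code
# over GF(2) and count the entries saved by the symmetric constraint
# A A^T = -I_k implied by G G^T = 0 (here -I = I since q = 2).
import numpy as np

k, n = 4, 8
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [1, 1, 1, 0]])          # G = (I_k | A) generates the [8,4] code
G = np.hstack([np.eye(k, dtype=int), A])

assert ((G @ G.T) % 2 == 0).all()     # self-orthogonal (here self-dual, since k = n - k)

free_entries = k * (n - k)            # entries of A in a plain systematic representation
constraints  = k * (k + 1) // 2       # binom(k+1, 2) symmetric relations from A A^T = -I
print(free_entries - constraints)     # 16 - 10 = 6 field elements, up to the paper's slack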
PIsignHD: A New Structure for the SQIsign Family with Flexible Applicability
In this paper, we propose a new structure for the SQIsign family: Pentagon Isogeny-based Signature in High Dimension (referred to as PIsignHD).
The new structure separates the hash of the commitment from that of the message by employing two cryptographic hash functions. This feature is desirable in practice, particularly for applications based on mobile low-power devices or for those deployed interactively over the Internet or in the cloud computing setting.
This structure can be generally applied to all SQIsign variants. In this work, we focus on the instance based on SQIsignHD.
Compared with SQIsignHD, PIsignHD has the same signature size (and an even smaller one in some application scenarios). For the NIST-I security level, the signature size of PIsignHD can be reduced to 519 bits, while a SQIsignHD signature takes 870 bits. Additionally, PIsignHD has an efficient online signing process and enjoys considerable application flexibility. In our experiments, the online signing process of PIsignHD runs in 4 ms.
Suwako: A Logarithmic-Depth Modular Reduction for Arbitrary Trinomials over $\mathbb{F}_{2^m}$ without Pre-computation
Modular reduction over binary extension fields $\mathbb{F}_{2^m}$ is a fundamental operation in cryptographic implementations, including GCM and Elliptic Curve Cryptography. Traditional reduction algorithms (e.g., linear LFSR-based methods) are highly sensitive to the algebraic structure of the defining polynomial. This sensitivity is especially acute for trinomials $P(x) = x^m + x^t + 1$, where cryptographic standards have historically mandated the use of ``friendly'' polynomials (with small $t$) to avoid the linear performance degradation associated with ``random'' or ``unfriendly'' parameters. In this paper, we challenge this constraint by introducing Suwako, a novel reduction algorithm. By exploiting the self-similar algebraic structure of the reduction map, Suwako transforms the reduction process from a serial iterative chain (dependent on the degree gap $\Delta = m-t$) into a logarithmic-depth binary-doubling structure. We theoretically prove that Suwako achieves $O(\log m)$ folding depth for arbitrary trinomials, regardless of the position of the middle term $t$. Furthermore, unlike window-based or Montgomery/Barrett reduction methods, Suwako requires no pre-computation, making it optimal for dynamic environments.
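For contrast, here is the serial baseline that Suwako improves upon (a sketch of the textbook shift-and-XOR reduction, not Suwako itself): reducing modulo $P(x) = x^m + x^t + 1$ uses the identity $x^m \equiv x^t + 1$, and each folding step can re-create bits above position $m$, so the chain length grows with the degree gap $\Delta = m - t$.

# Textbook serial reduction modulo x^m + x^t + 1 over GF(2); bits of the
# Python int `c` are polynomial coefficients. Each pass folds the part of
# degree >= m down via x^m = x^t + 1; for "unfriendly" t (small Delta = m - t)
# the loop iterates many times -- the serial chain a logarithmic-depth
# scheme like Suwako avoids.
def reduce_trinomial(c: int, m: int, t: int) -> int:
    mask = (1 << m) - 1
    while c >> m:                               # anything left above degree m-1?
        high = c >> m                           # H(x), where c = H(x)*x^m + L(x)
        c = (c & mask) ^ high ^ (high << t)     # L + H + H*x^t
    return c

# Example: x^10 mod (x^8 + x^3 + 1) = x^2*(x^3 + 1) = x^5 + x^2.
assert reduce_trinomial(1 << 10, 8, 3) == (1 << 5) | (1 << 2)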
FRIVail: A Data Availability Scheme based on FRI Binius
Data Availability Sampling (DAS) has emerged as a key scalability technique for blockchain systems, enabling light clients to verify that block data have been fully published without downloading them in their entirety. We introduce FRIVail, a new DAS construction built on top of the FRI-Binius polynomial commitment scheme, designed for datasets composed of many independent single-row payloads that together form a block’s data blob. FRIVail exploits the intrinsic Reed–Solomon structure of FRI, wherein each commitment naturally encodes a codeword that light clients can sample directly.
Each row of the blob is assigned an independent FRI proof. These row-level proofs are then combined into a global availability certificate using one of three aggregation strategies. The first constructs a succinct zero-knowledge proof attesting to the correct verification of all row-level FRI proofs, yielding a compact ZK proof of proofs that enables succinct global verification while preserving row independence. The second is a fully post-quantum construction that recursively applies FRI-Binius to build a proof of proofs. In this setting, global verification relies on FRI proximity checks, but reconstruction of the aggregated proof polynomial is required to recover embedded row-level information. The third is a hybrid aggregation based on KZG polynomial commitments, where the aggregated polynomial admits direct algebraic openings but relies on pairing-based assumptions and a trusted setup, and is therefore not post-quantum.
In all variants, light clients verify availability via a small number of local opening checks against the header commitment, without downloading entire rows or the full blob. We formalize DAS security in this multi-row, multi-proof setting and show that FRIVail achieves sublinear verification complexity, robustness against adversarial availability equivocation at the row level, and resistance to correlated sampling attacks. FRIVail provides a modular foundation for next-generation blockchain data availability protocols, supporting zero-knowledge-based, fully post-quantum, and hybrid cryptographic deployments.
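The sampling logic behind any DAS scheme is easy to quantify (a generic back-of-the-envelope sketch, not FRIVail's formal analysis): if an adversary withholds a fraction $\alpha$ of the symbols a light client may query, then $k$ uniformly random opening checks all succeed with probability $(1-\alpha)^k$.

# Generic DAS sampling argument (illustrative, not FRIVail-specific):
# probability that k random opening checks miss a withheld fraction alpha.
def miss_probability(alpha: float, k: int) -> float:
    return (1 - alpha) ** k

# With a rate-1/2 Reed-Solomon encoding, blocking reconstruction requires
# withholding about half the codeword; 30 samples then miss w.p. ~9.3e-10.
print(miss_probability(0.5, 30))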
On Delegation of Verifiable Presentations from mdoc and BBS Credentials
The interest in verifiable credential systems has gained traction since the eIDAS 2.0 Regulation was published. This regulation instructs EU member states to provide their citizens with digital identity wallets (EUDI Wallet) that must store the credentials and enable privacy-preserving presentation of identity information to relying parties. This new digital identity system requires defining new protocols and procedures to perform tasks involving the disclosure of identity information. One such procedure is the delegation of attestation, as reported in the EUDI Wallet Reference Implementation Roadmap.
In this work, we address the problem of constructing secure processes for the delegation of verifiable presentations derived from both verifiable and anonymous credentials. Our goal is to enable a credential holder (the delegator) to securely delegate another party (the delegatee) to present a credential on their behalf.
We introduce the notion of a verifiable presentation delegation scheme, formalizing the core algorithms, namely delegation issuance, delegated presentation, and presentation verification, and defining the relevant security properties that such a scheme should satisfy: correctness, unforgeability, and, when the scheme is built on top of anonymous credentials, also unlinkability. We present two concrete instantiations of delegation schemes: the first is built on top of mdoc verifiable credentials, the credential format currently supported by the EUDI Wallet Architecture and Reference Framework (EUDI ARF), while the second is built on top of BBS anonymous credentials. Finally, we discuss and analyze the security of our constructions in terms of the security properties we have introduced.
A New Approach to Large Party Beaver-Style MPC with Small Computational Overhead
Secure multi-party computation (MPC) enables $N$ parties to jointly evaluate any function over their private inputs while preserving confidentiality. While decades of research have produced concretely efficient protocols for small to moderate numbers of participants, scaling MPC to thousands of parties remains a central challenge. Most of the existing approaches either incur per-party costs linear in $N$, due to pairwise computations, or rely on heavy cryptographic tools such as homomorphic encryption, which introduces prohibitive overheads when evaluating Boolean circuits.
In this work, we introduce a new lightweight approach to designing semi-honest MPC protocols with per-party, per-gate computation and communication costs that are independent of $N$. Our construction leverages the Sparse Learning Parity with Noise (Sparse LPN) assumption in the random oracle model to achieve per-gate costs of $O(k^2 \cdot c(\lambda))$ computation and $O(c(\lambda))$ communication, where $k$ is the sparsity parameter for the Sparse LPN assumption and $c(\lambda)$ is an arbitrarily small super-constant in the security parameter $\lambda$. Assuming Sparse LPN remains hard for any super-constant sparsity, this yields the first semi-honest MPC protocol in the dishonest-majority setting with per-party per-gate costs bounded by an arbitrarily small super-constant overhead in $\lambda$.
Structurally, our MPC instantiates a Beaver-style MPC with the required correlations generated efficiently. Departing from prior approaches that generate Beaver triples silently (Boyle et al., 2019; 2020; 2022) or via homomorphic computation (Damgård et al., 2012), the focus of this work rests on efficiently generating a weaker correlation. In particular, using Sparse LPN we show that if we relax the correctness requirement in generating random Beaver triples to permit a tunably small inverse-polynomial error probability, such triples can be silently generated with arbitrarily small super-constant per-party computation. We then show that such correlations can be used in an efficient online phase similar to Beaver's protocol (with a tiny super-constant factor blow-up in communication).
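For reference, the online phase being relaxed here is Beaver's classic trick (a standard sketch of the textbook protocol, not this paper's Sparse-LPN triple generation): holding additive shares of a triple $(a, b, c)$ with $c = ab$, the parties multiply shared $x, y$ by opening the masked values $d = x - a$ and $e = y - b$.

# Standard Beaver multiplication over Z_p with additive shares (semi-honest,
# two parties shown for brevity); the classic online phase, not this paper's
# Sparse-LPN correlation generation.
import random
p = 2**61 - 1

def share(v):                          # additive 2-out-of-2 sharing
    r = random.randrange(p)
    return (r, (v - r) % p)

def open_(sh):
    return sum(sh) % p

x, y = 12345, 67890
a, b = random.randrange(p), random.randrange(p)
c = a * b % p                          # Beaver triple (a, b, c = a*b)
xs, ys, as_, bs, cs = map(share, (x, y, a, b, c))

d = open_([(xs[i] - as_[i]) % p for i in range(2)])   # d = x - a (public)
e = open_([(ys[i] - bs[i]) % p for i in range(2)])    # e = y - b (public)

# Each party outputs z_i = c_i + d*b_i + e*a_i; one party adds d*e.
zs = [(cs[i] + d * bs[i] + e * as_[i] + (d * e if i == 0 else 0)) % p
      for i in range(2)]
assert open_(zs) == x * y % p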
Zyga: Optimized Zero-Knowledge Proofs with Dynamic Public Inputs
We present Zyga, a pairing-based zero-knowledge proof system optimized for privacy-preserving DeFi applications. Our main contribution is an enhancement of existing zkSNARK constructions that enables dynamic public input substitution during verification while maintaining privacy of witness components through one-sided encoding. The one-sided encoding is tailored to practical deployment constraints on Solana and Ethereum, where G2 scalar multiplications are computationally expensive. Zyga separates private values (blinded through trusted setup) from public values (instantiated on-chain), enabling applications like private trading against current market rates without reproofing. We further introduce two-sided encoding, an extension that removes circuit structure restrictions by adding a private B commitment and a rebase mechanism, enabling arbitrary R1CS circuits with proof reuse across changing base values. We provide rigorous security analysis under discrete logarithm and q-Strong Diffie-Hellman assumptions, demonstrating computational soundness, zero-knowledge, and completeness. Performance analysis shows verification requires only 3–4 pairings with constant proof size, making it practical for blockchain deployment where transaction costs are critical.
Ciphertext-Ciphertext Matrix Multiplication: Fast for Large Matrices
Matrix multiplication of two encrypted matrices (CC-MM) is a key challenge for privacy-preserving machine learning applications. As modern machine learning models focus on scalability, fast CC-MM on large datasets is increasingly in demand.
In this work, we present a CC-MM algorithm for large matrices. The algorithm consists of plaintext matrix multiplications (PP-MM) and ciphertext matrix transpose algorithms (C-MT). We propose a fast C-MT algorithm, which is computationally inexpensive compared to PP-MM. By leveraging high-performance BLAS libraries to optimize PP-MM, we implement large-scale CC-MM with substantial performance improvements. Furthermore, we propose lightweight algorithms, significantly reducing the key size from $1\ 960$ MB to $1.57$ MB for CC-MM with comparable efficiency.
In a single-thread implementation, the C-MT algorithm takes $0.76$ seconds to transpose a $2\ 048\times 2\ 048$ encrypted matrix. The CC-MM algorithm requires $85.2$ seconds to multiply two $4\ 096\times 4\ 096$ encrypted matrices. For large matrices, our algorithm outperforms the state-of-the-art CC-MM method from Jiang-Kim-Lauter-Song [CCS'18] by a factor of over $800$.
Streaming Function Secret Sharing and Its Applications
Collecting statistics from users of software and online services is crucial to improve service quality, yet obtaining such insights while preserving individual privacy remains a challenge. Recent advances in function secret sharing (FSS) make scalable privacy-preserving measurement (PPM) possible, which has led to ongoing standardization at the IETF. However, FSS-based solutions still face several challenges for streaming analytics, where messages are continuously sent and secure computation tasks are repeatedly performed over incoming messages.
We introduce a new cryptographic primitive called streaming function secret sharing (SFSS), a variant of FSS that is particularly suitable for secure computation over streaming messages. We formalize SFSS and propose concrete constructions, including SFSS for point functions and predicate functions, as well as feasibility results for generic functions. SFSS powers several promising applications in a simple and modular fashion, including conditional transciphering, policy-hiding aggregation, and attribute-hiding aggregation. In particular, our SFSS formalization and constructions identify security flaws and efficiency bottlenecks in existing solutions, and SFSS-powered solutions achieve the expected security goals with asymptotically and concretely better efficiency and/or enhanced functionality.
Attacking and Securing Hybrid Homomorphic Encryption Against Power Analysis
We present the first power side-channel analysis of a Hybrid Homomorphic Encryption (HHE) tailored symmetric encryption scheme. HHE combines lightweight client-side Symmetric Encryption (SE) with server-side homomorphic evaluation, enabling efficient privacy-preserving computation for the client and minimizing the communication overhead. Recent integer-based HHE designs such as PASTA, MASTA, HERA, and Rubato rely on prime-field arithmetic, but their side-channel security has not been studied. This gap is critical, as modular arithmetic and large key spaces in integer-based schemes introduce new leakage vectors distinct from those in conventional Boolean symmetric ciphers. In this work, we close this gap by presenting the first power side-channel analysis of an HHE-tailored scheme, HERA.
Our results demonstrate a successful key recovery from as few as 40 power traces using Correlation Power Analysis. In addition to showing that such attacks are feasible, we develop the first masking framework for integer-based SE schemes to mitigate them. Our design integrates PINI-secure gadgets with assembly-level countermeasures to address transition leakage, and we validate its effectiveness using the Test Vector Leakage Assessment. Our experiments confirm both the practicality of the attack and the strength of the proposed countermeasures. We also demonstrate that the framework extends to other integer-based HHE schemes, by applying our technique to PASTA. Thus, we provide leakage models, identify relevant attack targets, and define evaluation benchmarks for integer-based HHE-tailored SE schemes, thereby filling a longstanding gap and laying the foundation for side-channel-resilient design in this area.
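For readers unfamiliar with the attack, a generic Correlation Power Analysis skeleton looks as follows (a hedged sketch; the paper's leakage model and HERA-specific prime-field targets are more involved): for each key guess, predict a leakage value per trace, correlate it against every sample point of the measured traces, and keep the guess with the strongest peak.

# Generic CPA skeleton (illustrative only; HERA's intermediates and the
# paper's exact leakage model are not reproduced here).
import numpy as np

def hamming_weight(v: np.ndarray) -> np.ndarray:
    # Hamming weight per byte-sized value (values assumed < 256 here)
    return np.unpackbits(v.astype(np.uint8)[:, None], axis=1).sum(axis=1)

def cpa(traces: np.ndarray, inputs: np.ndarray, intermediate, key_guesses):
    """traces: (N, samples); intermediate(inputs, k) -> attacked value per trace."""
    Tc = traces - traces.mean(axis=0)
    Tn = Tc / np.linalg.norm(Tc, axis=0)             # column-normalized traces
    scores = []
    for k in key_guesses:
        h = hamming_weight(intermediate(inputs, k)).astype(float)
        h = (h - h.mean()) / np.linalg.norm(h - h.mean())
        scores.append(np.abs(h @ Tn).max())          # best |Pearson r| over samples
    return key_guesses[int(np.argmax(scores))]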
High-Performance SIMD Software for Spielman Codes in Zero-Knowledge Proofs
We present the first high-performance SIMD software implementation of Spielman codes for their use in polynomial commitment schemes and zero-knowledge proofs. Spielman codes, as used in the Brakedown framework, are attractive alternatives to Reed-Solomon codes and benefit from linear-time complexity and field agnosticism. However, the practical deployment of Spielman codes has been hindered by a lack of research on efficient implementations. The involved costly finite-field arithmetic and random memory accesses operate on large volumes of data, typically exceeding gigabytes; these pose significant challenges for performance gains. To address these challenges, we propose several computational and memory-related optimizations that together reach an order-of-magnitude performance improvement in software. On the computation side, we propose SIMD optimizations using the AVX-512-IFMA instruction set and introduce a lazy reduction method to minimize the modular arithmetic cost. On the memory side, we implement a cache-friendly memory layout and a slicing technique, which exploit the CPU memory hierarchy. Finally, we present our multithreading approach to improve throughput without saturating memory bandwidth. Compared to prior Spielman software, our optimizations achieve speedups of up to 26.7x and 20.6x for single- and multi-threaded execution, respectively. In addition, instantiating our software with 64 threads on a high-end CPU even outperforms a recent FPGA accelerator by up to 4.3x for small and mid-sized polynomials. Our improvements make Spielman codes competitive with well-optimized Reed-Solomon codes on software platforms.
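The lazy-reduction idea is simple to state (a scalar sketch; the paper applies it across AVX-512-IFMA lanes with 52-bit multipliers): defer the modular reduction across a run of multiply-accumulates and reduce once at the end, as long as the unreduced accumulator cannot overflow.

# Lazy reduction, scalar version: one reduction per dot product instead of
# one per term. The vectorized code applies the same idea per SIMD lane.
p = (1 << 31) - 1

def dot_eager(xs, ys):
    acc = 0
    for a, b in zip(xs, ys):
        acc = (acc + a * b) % p        # reduce at every step
    return acc

def dot_lazy(xs, ys):
    acc = 0
    for a, b in zip(xs, ys):
        acc += a * b                   # products stay unreduced (bounded length)
    return acc % p                     # single reduction at the end

xs, ys = list(range(1, 100)), list(range(2, 101))
assert dot_eager(xs, ys) == dot_lazy(xs, ys)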
Gravity of the Situation: Security Analysis on Rocket.Chat E2EE
Rocket.Chat is a group chat platform widely deployed in industries and national organizations, with over 15 million users across 150 countries.
One of its main features is an end-to-end encryption (E2EE) protocol; however, no cryptographic security analysis of it has been conducted.
We conduct an in-depth cryptographic analysis of Rocket.Chat's E2EE protocol and identify multiple significant flaws that allow a malicious server or even an outsider to break the confidentiality and integrity of the group chat.
Specifically, we formally model and analyze the protocol using ProVerif under the Dolev-Yao model, uncovering multiple theoretical weaknesses and verifying that some of them lead to practical attacks.
Furthermore, through meticulous manual analysis, we identify additional vulnerabilities, including implementation flaws and cryptographic weaknesses such as CBC malleability, and demonstrate how they are exploitable in practical attack scenarios.
To validate our findings, we develop Proof-of-Concept implementations, highlighting the real-world feasibility of these attacks.
We also propose mitigation techniques and discuss the implications of our attacks.
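The CBC malleability the authors exploit is the textbook bit-flipping property (a minimal demonstration assuming the pycryptodome package; Rocket.Chat's actual message format is not modeled here): since $P_{i+1} = D_K(C_{i+1}) \oplus C_i$, flipping a bit in ciphertext block $C_i$ flips the same bit in plaintext block $P_{i+1}$.

# Textbook CBC bit-flipping (requires pycryptodome); unauthenticated CBC
# lets an attacker flip chosen plaintext bits without knowing the key.
from Crypto.Cipher import AES
import os

key, iv = os.urandom(16), os.urandom(16)
pt = b"amount=0010;pad." + b"B" * 16            # two 16-byte blocks
ct = bytearray(AES.new(key, AES.MODE_CBC, iv).encrypt(pt))

ct[7] ^= 0x01                                    # flip one bit in block 0 ...
out = AES.new(key, AES.MODE_CBC, iv).decrypt(bytes(ct))
assert out[16 + 7] == pt[16 + 7] ^ 0x01          # ... same bit flips in block 1
# (block 0 of `out` decrypts to garbage; the attack targets block 1)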
The Nonlinear Filter Model of Stream Cipher Redivivus
The nonlinear filter model is an old and well understood approach to the design of secure stream ciphers. Extensive research over several decades has shown how to attack stream ciphers based on this model and has identified the security properties required of the Boolean function used as the filtering function to resist such attacks. This led to the problem of constructing Boolean functions which provide adequate security and at the same time are efficient to implement. Unfortunately, over the last two decades no fully satisfactory solutions to this problem appeared in the literature. The lack of good solutions has effectively led to the nonlinear filter model becoming more or less obsolete. This is a big loss to the cryptographic design toolkit, since the great advantages of the nonlinear filter model are its simplicity, well understood security and the potential to provide low cost solutions for hardware oriented stream ciphers. In this paper, we revive the nonlinear filter model by constructing appropriate Boolean functions which provide required security and are also efficient to implement. We put forward concrete suggestions of stream ciphers which are $\kappa$-bit secure against known types of attacks for $\kappa=80$, 128, 160, 192, 224 and 256. For the 80-bit and the 128-bit security levels, the gate count estimates of our proposals compare quite well to the famous ciphers Trivium and Grain-128a respectively, while for the 256-bit security level, we do not know of any other stream cipher design which has such a low gate count.
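For readers unfamiliar with the model (a toy sketch with an illustrative register and filter, not one of the paper's concrete proposals): a nonlinear filter generator clocks a single LFSR and outputs, at each step, a nonlinear Boolean function of a few state bits.

# Toy nonlinear filter generator: one LFSR, keystream bit = f(tapped state bits).
# Register length, feedback taps and the filter below are illustrative only.
def keystream(state_bits, n_out):
    s = list(state_bits)                       # LFSR state, s[0] is the output end
    out = []
    for _ in range(n_out):
        f = s[0] ^ (s[3] & s[7]) ^ (s[5] & s[9] & s[12])   # nonlinear filter
        out.append(f)
        fb = s[0] ^ s[2] ^ s[13] ^ s[15]       # illustrative feedback polynomial
        s = s[1:] + [fb]                       # clock the register
    return out

print(keystream([1, 0] * 8, 20))               # 16-bit toy state, 20 keystream bits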
Far-Field $Singing$ FPGAs: Repurposing Routing Fabrics into 100 m Covert Radiators
FPGAs rely on highly dense and symmetric internal routing networks to interconnect their configurable logic elements. In standard applications, these interconnects are used solely for digital signal transfer within the device, leaving many routing paths idle. We study the surprising ability of configurable FPGA routing fabrics to act as intentional radiators when structured and driven coherently. Building on prior near-field demonstrations (few centimeters), we (i) present a practical toolchain and methodology for synthesizing "fabric-only" antennas using constrained placement/routing; (ii) demonstrate reliable far-field reception at extremely long ranges (≤ 100 m) and quantified bit-error performance at meter-scale ranges using ASK/FSK modulation and simple ECC; and (iii) analyze the security implications by formalizing adversary capabilities, enumerating novel multi-tenant attack vectors, and outlining detection and mitigation strategies. Our work bridges implementation engineering, complex physical-layer measurement (with a set of complex far-field measurement apparatus), and security analysis, and highlights the urgent need for screening and runtime monitoring in shared FPGA environments. We systematically shape and combine unused paths into contiguous structures such as fractal, loop, dipole, snake, spiral, and array antennas, which required building an automated toolchain. When energized, these embedded structures emit measurable electromagnetic energy that can serve as a stealth communication channel. We extend this concept far beyond previous near-field demonstrations, achieving reliable far-field reception, demonstrated rigorously with various measurement setups, a first for this class of long-range FPGA-based antennas without any external radiating RF hardware, from a tiny ∼1×1 cm² device. We further show a Trojan example, triggered by rare events, that attacks a decryption-oracle model.
ALKAID: Accelerating Three-Party Boolean Circuits by Mixing Correlations and Redundancy
Secure three-party computation (3PC) with semi-honest security under an honest majority offers notable efficiency in computation and communication; for Boolean circuits, each party sends a single bit for every AND gate, and nothing for XOR. However, round complexity remains a significant challenge, especially in high-latency networks. Some works support multi-input AND gates and thereby reduce online round complexity, but they require \textit{exponential} communication for generating the correlations in either the preprocessing or the online phase. How to extend AND gates to multiple inputs while keeping correlation generation efficient remains an open problem.
To address this problem, we propose ALKAID, a round-efficient 3PC framework for Boolean circuits built on improved multi-input AND gates. By mixing correlations and redundancy, we propose a concretely efficient correlation generation approach for small input sizes $N<4$ and shift the correlation generation to the preprocessing phase. Building on this, we construct a round-efficient AND protocol for the general case $N>4$. Exploiting the improved multi-input AND gates, we design fast depth-optimized parallel prefix adders and share conversion primitives in 3PC, achieved with new techniques and optimizations for better concrete efficiency. We further apply these optimized primitives to enhance the efficiency of secure non-linear functions in machine learning. We implement ALKAID and extensively evaluate its performance. Compared to the state of the art, including ABY3 (CCS 2018), Trifecta (PoPETs 2023), and METEOR (WWW 2023), ALKAID enjoys $1.5\times$--$2.5\times$ efficiency improvements for Boolean primitives and non-linear functions, with better or comparable communication.
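The depth-optimized parallel prefix adder follows the classic logarithmic-depth carry computation (a plaintext sketch of the Kogge-Stone idea on machine words; the 3PC version evaluates the same AND/XOR gate structure over secret shares, and the paper's exact adder design may differ):

# Plaintext Kogge-Stone prefix adder on w-bit words: carries are computed in
# O(log w) levels of AND/XOR gates, the structure a round-efficient 3PC wants.
def prefix_add(a: int, b: int, w: int = 64) -> int:
    mask = (1 << w) - 1
    g, p = a & b, a ^ b                  # generate / propagate per bit
    d = 1
    while d < w:
        g |= p & (g << d) & mask         # prefix-combine (g, p) pairs
        p &= (p << d) & mask
        d <<= 1
    return (a ^ b ^ (g << 1)) & mask     # carry into bit i is g_{i-1}

assert prefix_add(12345, 67890) == 12345 + 67890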
Yoyo tricks with a BEANIE
BEANIE is a 32-bit tweakable block cipher, published in ToSC 2025.4, designed for memory encryption of microcontroller units. In this paper, we propose its first third-party analysis and present a key recovery against the full 5+5 rounds of BEANIE using a yoyo distinguisher. The attack has a cost close to the security claim of $2^{80}$ time and $2^{40}$ data.
SoK: Verifiable Federated Learning
Federated Learning (FL) is an advancement in Machine Learning motivated by the need to preserve the privacy of the data used to train models. While it effectively addresses this issue, the multi-participant paradigm on which it is based introduces several challenges. Among these are the risks that participating entities may behave dishonestly and fail to perform their tasks correctly. Moreover, due to the distributed nature of the architecture, attacks such as Sybil and collusion are possible. Recently, with advances in Verifiable Computation (VC) and Zero-Knowledge Proofs (ZKP), researchers have begun exploring how to apply these technologies to Federated Learning aiming to mitigate such problems. In this Systematization of Knowledge, we analyze the first, very recent works that attempt to integrate verifiability features into classical FL tasks, comparing their approaches and highlighting what is achievable with the current state of VC methods.
Who Verifies the Verifiers? Lessons Learned From Formally Verified Line-Point Zero-Knowledge
Computer-aided cryptography, with particular emphasis on formal verification, promises an interesting avenue to establish strong guarantees about cryptographic primitives. The appeal of formal verification is to replace error-prone pen-and-paper proofs with a proof that was checked by a computer and, therefore, does not need to be checked by a human. In this paper, we ask how reliable these machine-checked proofs are by analyzing a formally verified implementation of the Line-Point Zero-Knowledge (LPZK) protocol (Dittmer, Eldefrawy, Graham-Lengrand, Lu, Ostrovsky and Pereira, CCS 2023). The implementation was developed in EasyCrypt and compiled into OCaml code that was claimed to be high-assurance, i.e., to offer the formal guarantees of completeness, soundness, and zero knowledge.
We show that despite these formal claims, the EasyCrypt model was flawed, and the implementation (supposed to be high-assurance) had critical security vulnerabilities. Concretely, we demonstrate that: 1) the EasyCrypt soundness proof was incorrectly done, allowing an attack on the scheme that leads honest verifiers into accepting false statements; and 2) the EasyCrypt formalization inherited a deficient model of zero knowledge for a class of non-interactive zero knowledge protocols that also allows the verifier to recover the witness. In addition, we demonstrate 3) a gap in the proof of the perfect zero knowledge property of the LPZK variant of Dittmer, Ishai, Lu and Ostrovsky (CCS 2022), on which the EasyCrypt proof is based, which, depending on the interpretation of the protocol and security claim, could allow a malicious verifier to learn the witness.
Our findings highlight the importance of scrutinizing machine-checked proofs, including their models and assumptions. We offer lessons learned for both users and reviewers of tools like EasyCrypt, aimed at improving the transparency, rigor, and accessibility of machine-checked proofs. By sharing our methodology and challenges, we hope to foster a culture of deeper engagement with formal verification in the cryptographic community.
A Unified Key Recovery Framework for Impossible Boomerang Attacks: Applications to Full-Round ARADI and SKINNYe v2
The impossible boomerang attack is a powerful cryptanalytic technique, but existing key recovery methods face several limitations that restrict its applicability. Specifically, the key pre-guessing is coarse-grained, S-box details are ignored in the differential propagation, and both the complexity estimation and the determination of the key-guessing order remain rudimentary. To overcome these issues, we introduce three key improvement measures. First, we propose a flexible partial key and difference pre-guessing technique based on directed graphs, enabling selective identification of required keys and differences for generating partial pairs and quartets. Second, we propose a pre-sieving technique to eliminate invalid quartets early by exploiting cipher-specific details. Third, we introduce an automatic key-guessing strategy based on the same directed graphs to efficiently determine valid guessing orders. We integrate these techniques to develop a unified key recovery framework for impossible boomerang attacks, accompanied by a formal and precise characterization of the overall complexity. This is the first framework to support flexible key and difference pre-guessing while incorporating block cipher details during key recovery for impossible boomerang attacks. Crucially, it enables the automatic generation of detailed recovery steps, a capability missing in prior work.
As applications, under the four related-key/tweakey setting, we apply the framework to ARADI, a low-latency cipher proposed by the National Security Agency (NSA), and SKINNYe v2, a threshold-implementation-friendly cipher proposed at EUROCRYPT 2020. For ARADI, we achieve the first full-round attack with $2^{130}$ data, $2^{253.78}$ time, and $2^{235.75}$ memory complexity. For SKINNYe v2, we present the first 34-round impossible boomerang attack with $2^{66}$ data, $2^{253.75}$ time, and $2^{239.75}$ memory complexity. These results demonstrate the framework's significance and its substantial improvement in advancing the impossible boomerang attack.
Quantum Periodic Distinguisher Construction: Symbolization Method and Automated Tool
As one of the famous quantum algorithms, Simon's algorithm enables the efficient derivation of the period of periodic functions in polynomial time. However, the complexity of constructing periodic functions has hindered the widespread application of Simon's algorithm in symmetric-key cryptanalysis. Currently, aside from the exhaustive search-based testing method introduced by Canale et al. at CRYPTO 2022, there is no unified model for effectively searching for periodic distinguishers. Although Xiang et al. established a link between periodic functions and truncated differential theory at ToSC 2024, their approach lacks the ability to construct periods using unknown differentials and does not provide automated tools. This limitation underscores the inadequacy of existing methods in identifying periodic distinguishers for complex structures. In this paper, we address the challenge of advancing periodic distinguishers for symmetric-key ciphers. First, we propose a more generalized method for constructing periodic distinguishers, addressing the limitations of Xiang et al.'s theory in handling unknown differences. We further extend it to probabilistic periodic distinguishers. As a result, our method can cover a wider range of periodic distinguishers. Second, we introduce a novel symbolic representation to simplify the search for periodic distinguishers, and propose the first fully automated SMT-based search model, which efficiently addresses the challenges of manual searching in complex structures. Based on our method, we have achieved new quantum distinguishers with the following round configurations: 10 rounds for GFS-4F, 10 rounds for LBlock, 10 rounds for TWINE, and 16 rounds for Skipjack-B, improving the previous best results by 1, 2, 2, and 3 rounds, respectively. Our model also identifies the first 7/8/9-round periodic distinguishers for SKINNY. Compared with existing distinguishers (Hadipour et al., CRYPTO 2024) with the same round in the classical setting, our distinguishers achieve lower data complexity.
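To recall the target object (a classical toy simulation under the standard Simon promise $f(x \oplus s) = f(x)$; the paper's contribution is finding such periodic structure inside ciphers): each quantum run returns a uniformly random $y$ with $y \cdot s = 0$ over $\mathbb{F}_2$, and enough independent $y$'s pin down $s$ by linear algebra.

# Toy classical simulation of the linear-algebra step of Simon's algorithm:
# sample vectors y orthogonal to the hidden period s (what each quantum run
# yields), then recover s by exhausting the r-bit space (fine for toy r).
import random

def sample_orthogonal(s: int, r: int) -> int:
    while True:
        y = random.randrange(1 << r)
        if bin(y & s).count("1") % 2 == 0:       # <y, s> = 0 over GF(2)
            return y

def recover_period(samples, r: int) -> int:
    # the unique nonzero s consistent with all samples, once the samples
    # span the full (r-1)-dimensional orthogonal space (w.h.p. here)
    for s in range(1, 1 << r):
        if all(bin(y & s).count("1") % 2 == 0 for y in samples):
            return s

s, r = 0b1011, 4
ys = [sample_orthogonal(s, r) for _ in range(16)]
assert recover_period(ys, r) == s                # fails only with tiny probability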
An Ideal Linear Secret Sharing Scheme for Complete $t$-Partite $k$-Uniform Hypergraph Access Structures
Secret sharing schemes represent a crucial cryptographic protocol, with linear codes serving as a primary tool for their construction. This paper systematically investigates the construction of ideal secret sharing schemes for complete $t$-partite $k$-uniform hypergraph access structures using linear codes as the tool. First, it is proved that the generator matrix $G$ of an ideal linear code realizing a complete $t$-partite $2$-uniform hypergraph access structure must have rank $2$. Simultaneously, a novel method for constructing an ideal secret sharing scheme that realizes such access structures is proposed. Building on this foundation, the case of complete $t$-partite $2$-uniform hypergraphs is extended to complete $t$-partite $k$-uniform hypergraphs, and a method for constructing ideal secret sharing schemes to realize them is provided. Notably, both Shamir's method and the scheme proposed by Brickell et al. arise as special cases of our approach.
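For context, Shamir's scheme, recovered here as a special case, shares a secret as the constant term of a random degree-$(t-1)$ polynomial (a standard sketch over a prime field, not the paper's hypergraph construction):

# Standard Shamir (t, n) secret sharing over Z_p; the hypergraph schemes in
# the paper generalize this threshold special case.
import random
p = 2**127 - 1                                  # a Mersenne prime

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p)
            for x in range(1, n + 1)]

def reconstruct(shares):                        # Lagrange interpolation at 0
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

shares = share(secret=42, t=3, n=5)
assert reconstruct(shares[:3]) == 42            # any 3 of the 5 shares suffice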
Fully Distributed Multi-Point Functions for PCGs and Beyond
We introduce new Distributed Multi-Point Function (DMPF) constructions that make multi-point sharing as practical as the classic single-point (DPF) case. Our main construction, Reverse Cuckoo, replaces the ``theoretical'' cuckoo insertions approach to DMPFs with an MPC-friendly linear solver that circumvents the concrete inefficiencies. Combined with our new sparse DPF construction, we obtain the first fully distributed and efficient DMPF key generation that avoids trusted dealers and integrates cleanly with standard two-party MPC.
Applied to pseudorandom correlation generators (PCGs), our DMPFs remove the dominant ``sum of $t$ DPFs'' bottleneck. In Ring-LPN and Stationary-LPN pipelines (Crypto 2020, 2025), this translates to an order of magnitude more Beaver triples per second with an order of magnitude less communication compared to the status quo by Keller et al. (Eurocrypt 2018). The gains persist across fields and rings ($\mathbb{F}_{p^k}$, $\mathbb{Z}_{2^k}$ for $k\geq 1$) and are complementary to existing PCG frameworks: our constructions drop in as a black-box replacement for their sparse multi-point steps, accelerating all PCGs that rely on such encodings.
We provide a complete protocol suite (deduplication, hashing, linear solver, sparse DPF instantiation) with a semi-honest security proof via a straight-line simulator that reveals only hash descriptors and aborts with negligible (cuckoo-style) probability. A prototype implementation validates the asymptotics with strong concrete performance improvements.
TAPAS: Datasets for Learning the Learning with Errors Problem
AI-powered attacks on Learning with Errors (LWE), an important hard math problem in post-quantum cryptography, rival or outperform "classical" attacks on LWE under certain parameter settings. Despite the promise of this approach, a dearth of accessible data limits AI practitioners' ability to study and improve these attacks. Creating LWE data for AI model training is time- and compute-intensive and requires significant domain expertise. To fill this gap and accelerate AI research on LWE attacks, we propose the TAPAS datasets, a Toolkit for Analysis of Post-quantum cryptography using AI Systems. These datasets cover several LWE settings and can be used off-the-shelf by AI practitioners to prototype new approaches to cracking LWE. This work documents TAPAS dataset creation, establishes attack performance baselines, and lays out directions for future work.
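The underlying data is simple to describe (a minimal generator under standard LWE conventions; TAPAS's actual parameter sets and preprocessing are documented in the paper): samples are pairs $(A, b)$ with $b = As + e \bmod q$ for a small secret $s$ and small error $e$.

# Minimal LWE sample generator: b = A s + e (mod q) with a small secret and
# rounded-Gaussian error; real TAPAS parameters/preprocessing differ.
import numpy as np
rng = np.random.default_rng(0)

n, m, q, sigma = 256, 512, 3329, 3.0
s = rng.integers(-1, 2, size=n)                    # ternary secret (illustrative)
A = rng.integers(0, q, size=(m, n))                # uniform public matrix
e = np.rint(rng.normal(0, sigma, size=m)).astype(int)
b = (A @ s + e) % q                                # one batch of LWE samples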
LAKE: Lattice-Code Accelerated Kyber Encapsulation
The standardization of CRYSTALS-Kyber (ML-KEM) by NIST represents a milestone in post-quantum security, yet its substantial communication overhead remains a critical bottleneck for resource-constrained environments. This paper introduces LAKE (Lattice-Code Accelerated Kyber Encapsulation), a novel cryptographic framework that symbiotically integrates coding theory into the Module-LWE structure. Unlike previous concatenation approaches, LAKE embeds density-optimized Construction-A lattices derived from Polar codes directly into the public matrix generation. This structural innovation yields a 15–25% reduction in ciphertext size while simultaneously improving the Decryption Failure Rate (DFR) from $2^{-139}$ to $2^{-156}$, leveraging innate coding gains to suppress noise. We provide a rigorous reduction of LAKE's IND-CCA2 security to the hardness of the Structured Module-LWE problem. Although LAKE introduces a modest 8–15% computational overhead, it optimizes the critical "Compute-for-Bandwidth" trade-off, exploiting the asymmetry between low-cost local processing and high-cost transmission. Consequently, LAKE significantly enhances deployment viability in high-latency, energy-sensitive domains such as Satellite Communications (SatCom), Narrowband-IoT (NB-IoT), and tactical edge networks, where transmission efficiency is the dominant performance metric.
Key Recovery Attacks on ZIP Ciphers: Application to ZIP-AES and ZIP-GIFT
The construction of beyond-birthday-bound-secure pseudorandom functions (PRFs) from the XOR-sum of two pseudorandom permutations (PRPs) has been known since EUROCRYPT 1998. However, the first concrete instance was only published recently at FSE 2022: the low-latency PRF Orthros. Subsequently, at ASIACRYPT 2024, Flórez-Gutiérrez et al. proposed the general framework of ZIP ciphers, where a block cipher $E_{1} \circ E_{0}$ is used to construct the PRF $E_{0} \oplus E_{1}^{-1}$. This allows re-using some of the cryptanalysis of the underlying block cipher. They propose the PRF ZIP-AES as the XOR-sum of 5 AES encryption rounds and 5 decryption rounds. They discuss differential, linear, and integral distinguishers for this construction, but provide no concrete key recovery attacks. Furthermore, they propose ZIP-GIFT as a 64-bit PRF but leave cryptanalysis as future work. In this work, we provide the first third-party analysis of ZIP-AES and ZIP-GIFT. We focus our efforts on the unique challenges of performing key recovery attacks on ZIP ciphers and propose new techniques to overcome these challenges. We show differential, linear, and integral key recovery attacks for both PRFs. We develop new techniques for integral key recovery attacks and show how to extend differential characteristics by several rounds for key recovery.
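The ZIP template itself is one line (a toy sketch with stand-in 8-bit permutations; ZIP-AES instantiates the two halves with five AES rounds each): split a block cipher $E = E_1 \circ E_0$ at its midpoint and define $F(x) = E_0(x) \oplus E_1^{-1}(x)$.

# ZIP template: from a block cipher E = E1 ∘ E0, define the PRF
# F(x) = E0(x) XOR E1^{-1}(x). Toy 8-bit permutations stand in for the
# keyed AES/GIFT halves used by ZIP-AES and ZIP-GIFT.
import random
rng = random.Random(1)

perm0 = list(range(256)); rng.shuffle(perm0)       # E0 (keyed in reality)
perm1 = list(range(256)); rng.shuffle(perm1)       # E1
inv1 = [0] * 256
for x, y in enumerate(perm1):
    inv1[y] = x                                     # E1^{-1}

def zip_prf(x: int) -> int:
    return perm0[x] ^ inv1[x]                       # not a permutation: a PRF

print([zip_prf(x) for x in range(8)])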
Refined Modelling of the Primal Attack, and Variants Against Module-Learning With Errors
The primal attack reduces Learning with Errors (LWE) to the unique Shortest Vector Problem (uSVP), and then applies lattice reduction such as BKZ to solve the latter. Estimating the cost of the attack is required to evaluate the security of constructions based on LWE.
Existing fine-grained estimators for the cost of the primal attack, due to Dachman-Soled--Ducas--Gong--Rossi (CRYPTO 2020) and Postlethwaite--Virdia (PKC 2021), differ from experimental data as they implicitly assume the unique shortest vector is resampled several times during the attack, changing its length. Furthermore, these estimators consider only the first two moments of the LWE secret and error, and therefore do not differentiate between distinct centred distributions with equal variances. We remedy both issues by initially fixing the short vector's length, and later integrating over its distribution. We provide extensive experimental evidence that our estimators are more accurate and faithfully capture the behaviour of different LWE distributions.
In the case of Module-LWE, lattice reduction utilising the module structure could lead to cheaper attacks. We build upon the analysis of module lattice reduction by Ducas--Engelberts--Perthuis (Asiacrypt 2025), providing a simulator for Module-BKZ generalising the BKZ simulator of Chen--Nguyen (Asiacrypt 2011). We design estimators for a module variant of the primal attack, supporting our analysis with experimental evidence. Asymptotically, we show the module primal attack over a degree $d$ number field $K$ has a reduced cost, resulting in a subexponential gain, whenever the discriminant $\Delta_K$ satisfies $\vert \Delta_K \vert < d^d$, one such case being non-power-two cyclotomics.
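The first step of the attack is the standard embedding (a sketch of a Bai-Galbraith-style basis under common conventions; the paper's refinements concern the cost estimate, not this construction): from samples $(A, b = As + e \bmod q)$, build a lattice that contains the unusually short vector $(s, e, 1)$ and run lattice reduction on it.

# Kannan/Bai-Galbraith-style embedding for the primal attack: the rows below
# generate a lattice containing the short vector (s, e, 1), since
# A s + e - b = 0 (mod q). Lattice reduction (e.g., BKZ) then hunts for it.
import numpy as np

def primal_basis(A: np.ndarray, b: np.ndarray, q: int) -> np.ndarray:
    m, n = A.shape
    B = np.zeros((n + m + 1, n + m + 1), dtype=int)
    B[:n, :n] = np.eye(n, dtype=int)          # rows (e_i, -A[:, i], 0)
    B[:n, n:n + m] = -A.T % q
    B[n:n + m, n:n + m] = q * np.eye(m, dtype=int)   # mod-q rows
    B[-1, n:n + m] = b                        # embedding row (0, b, 1)
    B[-1, -1] = 1
    return B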
Towards Practical Multi-Party Hash Chains using Arithmetization-Oriented Primitives - With Applications to Threshold Hash-Based Signatures
Uncategorized
Uncategorized
Despite their simplicity and quantum-resistant security properties, the deployment of hash chains in distributed settings through secure multi-party computation (MPC) has been demonstrated to be impractical when employing traditional hash functions (e.g., SHA-2/SHA-3) due to their high number of non-linear gates, which leads to heavy computational costs. In this work, we present a comprehensive evaluation of hash chain computations over MPC using arithmetization-oriented (AO) primitives, specifically focusing on the Poseidon2 family of hash functions. We systematically analyze the MPC-friendliness of various Poseidon2 instantiations across different prime fields and parameter choices to minimize both multiplicative depth and preprocessing requirements. We conduct extensive benchmarks using the MP-SPDZ framework across three state-of-the-art MPC protocols under varying network conditions and adversarial models. We further explore practical applications to threshold cryptography, presenting optimized implementations of threshold hash-based signatures that achieve signing times of less than 1 second in a 3-party setting for practical parameter sets.
Specifically, we demonstrate how structural parallelism in hash-based signatures can be exploited to batch independent hash chains within a single MPC execution, and introduce a time-memory trade-off that enables non-interactive online signature generation through systematic precomputation of all chain intermediates. Our work suggests the practical viability of moderate length AO-based hash chains for MPC applications.
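The object being computed under MPC is elementary in the clear (a plaintext sketch; the paper's contribution is evaluating this on secret-shared inputs with Poseidon2 instead of a SHA-2-style hash): a length-$\ell$ hash chain iterates one hash function, and keeping every intermediate is the time-memory trade-off mentioned above.

# Plaintext hash chain x_i = H(x_{i-1}); the paper evaluates this computation
# under MPC with the AO hash Poseidon2 rather than SHA-256.
import hashlib

def hash_chain(seed: bytes, length: int) -> list[bytes]:
    out = [seed]
    for _ in range(length):
        out.append(hashlib.sha256(out[-1]).digest())
    return out            # storing all intermediates = the time-memory trade-off

chain = hash_chain(b"\x00" * 32, 16)
assert hashlib.sha256(chain[3]).digest() == chain[4]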
Fourier Sparsity of Delta Functions and Matching Vector PIRs
In this paper we study a basic and natural question about Fourier analysis of Boolean functions, which has applications to the study of Matching Vector based Private Information Retrieval (PIR) schemes.
For integers $m,r$, define a delta function on $\{0,1\}^r \subseteq \mathbb{Z}_m^r$ to be a function $f: \mathbb{Z}_m^r \to \mathbb{C}$ such that $f(0) = 1$ and $f(x) = 0$ for all nonzero Boolean $x$.
The basic question that we study is how small can the Fourier sparsity of a delta function be; namely, how sparse can such an $f$ be in the Fourier basis?
In addition to being intrinsically interesting, such questions arise naturally while studying "$S$-decoding polynomials" for the known matching vector families. Finding $S$-decoding polynomials of reduced sparsity -- which corresponds to finding delta functions with low Fourier sparsity -- would improve the current best PIR schemes.
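In standard notation (a restatement of the definition above, using the usual character basis of $\mathbb{Z}_m^r$), the Fourier expansion and sparsity are

$$ f(x) = \sum_{a \in \mathbb{Z}_m^r} \hat{f}(a)\, \omega_m^{\langle a, x \rangle}, \qquad \omega_m = e^{2\pi i/m}, \qquad \mathrm{sparsity}(f) = \bigl|\{\, a \in \mathbb{Z}_m^r : \hat{f}(a) \neq 0 \,\}\bigr|, $$

and a delta function is constrained only on Boolean inputs: $f(0) = 1$ and $f(x) = 0$ for nonzero $x \in \{0,1\}^r$, with its values outside $\{0,1\}^r$ left free.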
We show nontrivial upper and lower bounds on the Fourier sparsity of delta functions.
Our proofs are elementary and clean. These results imply limitations on improvements to the Matching Vector PIR schemes simply by finding better $S$-decoding polynomials. In particular, there are no $S$-decoding polynomials which can make Matching Vector PIRs based on the known matching vector families achieve polylogarithmic communication for constantly many servers.
Many interesting questions remain open.
Achieving CPAD security for BFV: a pragmatic approach
Fully Homomorphic Encryption (FHE) aims at ensuring privacy of sensitive data while taking advantage of external computations and services. However, using FHE in real-world scenarios reveals new kinds of security issues. In particular, following the seminal Eurocrypt 2021 paper of Li and Micciancio, CPAD security has emerged as a fundamental notion for FHE, unveiling a subtle interplay between security and correctness. For correct (F)HE schemes, CPA security already implies CPAD security. However, all known practical FHE schemes are (R)LWE-based and, as such, are prone to decryption errors; and even if it is possible to ensure statistical correctness by selecting appropriate parameters, achieving this while maintaining malleability --- the mainspring of FHE --- still remains challenging. Moreover, practical CPAD attacks have recently been designed against most known FHE schemes. We propose in this paper a complete, simple and rigorous framework for reaching CPAD security for one of them, BFV.
Our approach relies on a combination of alternate average-case/worst-case noise variance monitoring --- based on dependency tracking during the homomorphic calculations --- and on smudging. It comes with an automated parameter-setting methodology, which connects it to the recently proposed Application-Aware HE paradigm while relieving library end-users from the burden of enforcing the paradigm's constraints by hand.
MIOPE: A Modular framework for Input and Output Privacy in Ensemble inference
We introduce a simple yet novel framework for privacy-preserving machine learning inference that allows a client to query multiple models without a trusted third-party aggregator by leveraging homomorphically encrypted model evaluation and multi-party computation. This setting allows for dispersed training of models such that a client can query each separately and aggregate the results of this `ensemble inference'; this avoids the data leakage inherent to techniques that train collectively, such as federated learning. Our framework, which we call MIOPE, allows the data providers to keep the training phase local to provide tighter control over these models, and additionally provides the benefit of easy retraining to improve inference of the ensemble. MIOPE uses homomorphic encryption to keep the querying client's data private and multi-party computation to hide the individual model outputs. We illustrate the design and trade-offs of input- and output-hiding ensemble inference as provided by MIOPE and compare performance to a centralized approach. We evaluate our approach with a standard dataset and various regression models and observe that the MIOPE framework can lead to accuracy scores that are only marginally lower than centralized learning. The modular design of our approach allows the system to adapt to new data, better models, or security requirements of the involved parties.
On Reed–Solomon Proximity Gaps Conjectures
We disprove a range of conjectures for Reed-Solomon codes underpinning the security and efficiency of many modern proof systems, including SNARKs based on FRI (Ben-Sasson-Bentov-Horesh-Riabzev, ICALP’18), DEEP-FRI (Ben-Sasson-Goldberg-Kopparty-Saraf, ITCS’20), STIR (Arnon-Chiesa-Fenzi-Yogev, CRYPTO’24), and WHIR (Arnon-Chiesa-Fenzi-Yogev, preprint). Concretely, we prove that the following conjectures are false:
1. The correlated agreement up-to-capacity conjecture of Ben-Sasson-Carmon-Ishai-Kopparty-Saraf (J. ACM’23),
2. The mutual correlated agreement up-to-capacity conjecture of WHIR,
3. The list-decodability up-to-capacity conjecture of DEEP-FRI, which follows from existing results in the literature.
We then propose minimal modifications to these conjectures up to the list-decoding capacity bound.
Our second main contribution is a proof that correlated agreement with small enough error probability implies list decoding of Reed-Solomon codes. Thus, any future positive results on our correlated agreement conjectures with small enough error probability would imply similar results in classical list decoding. A reduction from proximity gaps to list-decodability had heretofore been a natural open problem.
Improving the Efficiency of zkSNARKs for Ballot Validity
Homomorphic tallying in secure e-voting protocols enables privacy-preserving vote aggregation. For this approach, zero-knowledge proofs (ZKPs) for ensuring the validity of encrypted ballots are an essential component.
While it has been common to construct tailored ZKPs for every kind of ballot and voting method at hand, Huber et al. recently demonstrated that general-purpose ZKPs (GPZKPs), such as Groth16 zkSNARKs, are also suited for checking ballot validity. Unlike tailored solutions, GPZKPs provide a unified, generic, and flexible framework for this task. In this work, we improve on the initial GPZKPs for ballot validity proposed by Huber et al. Specifically, we present several circuit-level optimizations that significantly reduce proving costs for exponential-ElGamal-encrypted ballots. We provide an independent, ready-to-use Circom implementation along with concrete benchmarks, demonstrating substantial improvements in performance and practical usability over prior implementations.
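The homomorphic-tallying substrate is exponential ElGamal (a standard sketch over a toy multiplicative group; real deployments use an elliptic-curve group): encrypting $m$ in the exponent makes ciphertexts additively homomorphic, so encrypted ballots can be aggregated before a single decryption.

# Exponential ElGamal sketch: Enc(m) = (g^r, g^m * h^r) mod P. Multiplying
# ciphertexts componentwise adds votes; one decryption (plus a small
# discrete log) recovers the tally. Toy modulus/generator, illustrative only.
import random

P = 2**61 - 1                      # Mersenne prime (toy group)
g = 3
x = random.randrange(2, P - 1)     # decryption key
h = pow(g, x, P)

def enc(m):
    r = random.randrange(2, P - 1)
    return (pow(g, r, P), pow(g, m, P) * pow(h, r, P) % P)

def add(c, d):                     # homomorphic addition of plaintexts
    return (c[0] * d[0] % P, c[1] * d[1] % P)

def dec_small(c, bound=1000):      # brute-force g^m for small tallies
    gm = c[1] * pow(c[0], P - 1 - x, P) % P
    return next(m for m in range(bound) if pow(g, m, P) == gm)

tally = enc(0)
for vote in [1, 0, 1, 1]:
    tally = add(tally, enc(vote))
assert dec_small(tally) == 3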
Turning Multiple Key-Dependent Attacks into Universal Attacks
Key-dependent attacks are effective only for specific weak-key classes, limiting their practical impact. We present a generic statistical framework that combines multiple key-dependent distinguishers into universal attacks covering the full key space. Using log-likelihood ratio statistics, our framework tests the secret key against multiple weak-key distinguishers, aggregates their evidence to determine whether the key is weak or strong for each distinguisher, and exploits this classification to reduce the effective key entropy for key recovery.
We apply this to Orthros-PRF, a sum-of-permutations (SoP) design where any differential-based distinguisher holds only for a fraction of keys. This yields the first universal 8-round differential-linear (DL) key-recovery attack with median time complexity $2^{119.58}$, whereas prior work reached at most 7 rounds in the weak-key setting. To discover the required distinguishers, we extend the open-source S-box Analyzer tool with MILP support for deterministic propagation and develop a model integrating distinguisher search with key recovery. This enables automated discovery of multidimensional DL distinguishers covering up to 10 rounds in each Orthros branch, improving prior work by 4 rounds.
Our results demonstrate that statistical aggregation of multiple weak-key distinguishers enables effective universal cryptanalysis. Our framework is generic and is applicable to other primitives with multiple identifiable weak-key classes.
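The aggregation statistic is the classic log-likelihood ratio (a generic illustration; the paper's per-distinguisher distributions and thresholds are cipher-specific): sum $\log(p_1(x)/p_0(x))$ over the observations and compare against a threshold to decide weak versus strong.

# Generic log-likelihood-ratio test (illustrative; the paper aggregates such
# statistics across many weak-key distinguishers): decide between a weak-key
# distribution p1 and a strong-key distribution p0 from samples.
import math
import random

def llr(samples, p1, p0):
    return sum(math.log(p1[s] / p0[s]) for s in samples)

# e.g., a bit that is biased under a weak-key distinguisher, uniform otherwise
p_weak, p_strong = {0: 0.55, 1: 0.45}, {0: 0.5, 1: 0.5}
data = [0 if random.random() < 0.55 else 1 for _ in range(10_000)]
print("weak" if llr(data, p_weak, p_strong) > 0 else "strong")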
Laminate: Succinct SIMD-Friendly Verifiable FHE
In outsourcing computation to untrusted servers, one can cryptographically ensure privacy using Fully Homomorphic Encryption (FHE) or ensure integrity using Verifiable Computation (VC) such as SNARK proofs. While each is practical for some applications in isolation, efficiently composing FHE and VC into Verifiable Computing on Encrypted Data (VCoED) remains an open problem.
We introduce Laminate, the first practical method for adding integrity to BGV-style FHE, thereby achieving VCoED. Our approach combines the blind interactive proof framework with a tailored variant of the GKR proof system that avoids committing to intermediate computation states. We further introduce variants employing transcript packing and folding techniques. The resulting encrypted proofs are concretely succinct: 270kB, compared to 1TB in prior work, to evaluate a batch of $B=2^{14}$ instances of size $n=2^{20}$ and depth $d=32$. Asymptotically, the proof size and verifier work is $O(d \log (Bn))$, compared to $\Omega(BN\log n)$ in prior work (for ring dimension $N$).
Unlike prior schemes, Laminate utilizes the full SIMD capabilities of FHE for both the payload circuit evaluation and proof generation; adds only constant multiplicative depth on top of payload evaluation while performing $\tilde{O}(n)$ FHE operations; eliminates the need for witness reduction; and is field-agnostic. The resulting cost of adding integrity to FHE, compared to assuming honest evaluation, is ${\sim}12\times$ to ${\sim}36\times$ overhead (for deep multiplication-heavy circuits of size $2^{20}$), which is $>500\times$ faster than the state-of-the-art.
Meta-PBS: Compact High-Precision Programmable Bootstrapping
Currently, most FHE schemes realize bootstrapping through the linear-decrypt-then-round paradigm. For the programmable bootstrapping (PBS) of TFHE, this means the lookup table (LUT) needs a redundancy of $O(\sqrt{N})$ to be able to remove the modulus switching noise, which limits the plaintext modulus of PBS to $O(\sqrt{N})$. We remove this requirement for redundancy by proposing the Meta-PBS framework, which allows us to start with under-redundant or non-redundant LUTs. Meta-PBS iteratively blind-rotates the LUT, during which the LUT redundancy gradually increases. The bootstrapping outputs the correct result when the redundancy eventually exceeds the noise bound. Asymptotically, Meta-PBS requires $O(1)$ blind rotations in dimension $N$ to evaluate a negacyclic function modulo $2N$, whereas PBS needs $O(\sqrt{N})$ blind rotations. Meta-PBS also enjoys additive noise growth, allowing for more homomorphic arithmetic on bootstrapped ciphertexts. We modified Meta-PBS to support the simultaneous evaluation of multiple LUTs on the same ciphertext and/or arbitrary LUTs. According to our implementation, when evaluating a 12-bit negacyclic function, Meta-PBS outperforms EBS (PKC'23) by a factor of 79. When evaluating an arbitrary function on an 8-bit LWE ciphertext, Meta-PBS halves the running time of the Refined LHE (CCS'25) while allowing for a 27-times larger post-bootstrap linear combination.
PULSE: Parallel Private Set Union for Large-Scale Entities
Multi-party private set union (mPSU) allows multiple parties to compute the union of their private input sets without revealing any additional information.
Existing efficient mPSU protocols can be categorized into symmetric key encryption (SKE)-based and public key encryption (PKE)-based approaches. However, neither type of mPSU protocol scales efficiently to a large number of parties, as they fail to fully utilize available computational resources, leaving participants idle during various stages of the protocol execution.
This work examines the limitation of existing protocols and proposes a unified framework for designing efficient mPSU protocols. We then introduce an efficient Parallel mPSU for Large-Scale Entities (PULSE) that enables parallel computation, allowing all parties/entities to perform computations without idle time, leading to significant efficiency improvements, particularly as the number of parties increases. Our protocol is based on PKE and secure even when up to $n-1$ semi-honest parties are corrupted. We implemented PULSE and compared it to state-of-the-art mPSU protocols under different settings, showing a speedup of $1.91$ to $3.57\times$ for $n=8$ parties for various set sizes.
Cryptanalysis of Pseudorandom Error-Correcting Codes
Pseudorandom error-correcting codes (PRC) is a novel cryptographic primitive proposed at CRYPTO 2024. Due to the dual capability of pseudorandomness and error correction, PRC has been recognized as a promising foundational component for watermarking AI-generated content. However, the security of PRC has not been thoroughly analyzed, especially with concrete parameters or even in the face of cryptographic attacks. To fill this gap, we present the first cryptanalysis of PRC. We first propose three attacks to challenge the undetectability and robustness assumptions of PRC. Among them, two attacks aim to distinguish PRC-based codewords from plain vectors, and one attack aims to compromise the decoding process of PRC. Our attacks successfully undermine the claimed security guarantees across all parameter configurations. Notably, our attack can detect the presence of a watermark with overwhelming probability at a cost of $2^{22}$ operations. We also validate our approach by attacking real-world large generative models such as DeepSeek and Stable Diffusion. To mitigate our attacks, we further propose three defenses to enhance the security of PRC, including parameter suggestions, implementation suggestions, and constructing a revised key generation algorithm. Our proposed revised key generation function effectively prevents the occurrence of weak keys. However, we highlight that the current PRC-based watermarking scheme still cannot achieve a 128-bit security under our parameter suggestions due to the inherent configurations of large generative models, such as the maximum output length of large language models.
When Simple Permutations Mix Poorly: Limited Independence Does Not Imply Pseudorandomness
Over the past two decades, several works have used (almost) $k$-wise independence as a proxy for pseudorandomness in block ciphers, since it guarantees resistance against broad classes of statistical attacks. For example, even the case $k = 2$ already implies security against differential and linear cryptanalysis.
Hoory, Magen, Myers, and Rackoff (ICALP ’04; TCS ’05) formulated an appealing conjecture: if the sequential composition of $T$ independent local randomized permutations is (close to) four-wise independent, then it should also be a pseudorandom permutation. Here, "local" means that each output bit depends on only a constant number of input bits. This conjecture offers a potentially strong justification for analyses of block ciphers that establish (almost) $k$-wise independence of constructions of this type.
In this work, we disprove the conjecture in full generality by presenting an explicit local randomized permutation whose sequential composition is four-wise independent, but not a pseudorandom permutation. Our counterexample in fact extends to $k$-wise independence for any constant $k$.
$k$-out-of-$n$ Proofs and Application to Privacy-Preserving Cryptocurrencies
Cryptocurrencies enable transactions among mutually distrustful users. While UTXO-based cryptocurrencies offer mature solutions achieving strong privacy and supporting multi-receiver transfers, account-based cryptocurrencies currently lack practical solutions that simultaneously guarantee these properties.
To close this gap, we propose a generic framework for account-based cryptocurrencies that attains strong privacy and supports multi-receiver transfers, and then give a practical instantiation called \textit{Anonymous PGC}. Our system also excels in efficiency: for a 64-sized anonymity set and 8 receivers, Anonymous PGC achieves 2.4$\times$ faster transaction generation, 5.7$\times$ faster verification, and a 2.2$\times$ reduction in transaction size compared to state-of-the-art Anonymous Zether (IEEE S\&P 2021), which offers only weak privacy and no multi-receiver support.
At the core of Anonymous PGC are two novel zero-knowledge proofs of partial knowledge. First, we generalize the Groth-Kohlweiss (GK) $1$-out-of-$n$ proof (EUROCRYPT 2015) to the $k$-out-of-$n$ case, resolving an open problem regarding its generalization. In particular, the obtained proof combines seamlessly with range proofs, yielding an efficient $k$-out-of-$n$ range proof that demonstrates that $k$ witnesses among $n$ instances lie in specific ranges. Second, we extend the Attema-Cramer-Fehr (ACF) $k$-out-of-$n$ proof (CRYPTO 2021) to support distinct group homomorphisms, boosting its expressiveness while slashing both prover and verifier complexities from quadratic to linear. We believe these proofs are of independent interest in broader privacy-preserving applications.
SNARK Lower Bounds via Communication Complexity
We initiate the study of lower bounding the verification time of Succinct Non-interactive ARguments of Knowledge (SNARKs) built in the Polynomial Interactive Oracle Proof + Polynomial Commitment Scheme paradigm. The verification time of these SNARKs is generally dominated by the polynomial commitment scheme, and so we want to understand if polynomial commitment schemes admit lower bounds on the verification time. By recognizing that polynomial commitment schemes are also often built by applying cryptography to some information-theoretic core protocol, we seek to separate this core from the cryptography in a way that meaningfully captures the verification time required by the polynomial commitment scheme verifier.
We provide strong evidence that several polynomial commitment schemes have (nearly) optimal verifier times. Our evidence comes from connecting polynomial commitment schemes to certain information-theoretic protocols known as communication protocols from the field of communication complexity, a link which we believe to be of independent interest. Through this lens, we model the verifier work in the cryptographic protocols as information (i.e., number of bits) exchanged between parties in the communication protocols, allowing us to leverage lower bounds from communication complexity. These lower bounds give strong evidence that the verifier time in these polynomial commitment schemes must be at least the number of bits exchanged in the communication protocol.
We extract the communication protocol cores of three polynomial commitment schemes and lower bound the bits exchanged in these cores. The lower bounds we obtain match (up to poly-logarithmic factors) the best-known (asymptotic) verification times of the polynomial commitment schemes we examine in this work. Specifically, we show that for univariate/multilinear polynomials of size $N=2^n$:
- the communication core of Hyrax PCS (Wahby et al., S&P 2018) requires $\Omega(\sqrt{N})$ bits to be exchanged;
- the communication core of Bulletproofs PCS (Bootle et al., EUROCRYPT 2016; Bünz et al., S&P 2018) requires $\Omega(N)$ bits to be exchanged; and
- the communication core of Dory PCS (Lee, TCC 2021) requires $\Omega(\log(N))$ bits to be exchanged.
Our results strongly suggest a negative answer to a longstanding open question on whether the Bulletproofs verifier can be made sublinear time.
UFOs: An Ultra-fast Toolkit for Multiparty Computation of Small Elements
In most secure multiparty computation (MPC) scenarios, the data to be processed are much smaller than the underlying field size. The field is typically chosen to be large enough to guarantee security, e.g., a 128-bit prime field for 128-bit security, while the data can be as small as several bits, e.g., $4$ bits for a $16$-category classification task. This size gap can result in significant waste of communication and computation in existing MPC protocols, which often treat data of different ranges indiscriminately.
We introduce UFO$_\mathrm{s}$, an ultra-fast toolkit for multiparty computation on small elements. UFO$_\mathrm{s}$ provides highly optimized protocols for three fundamental tasks: one-hot encoding, comparison, and digit decomposition. While these protocols are designed specifically for small elements, as a demonstration of their power, we construct a radix sort protocol that sorts large field elements. Our experiments show significant performance improvements over state-of-the-art MPC implementations. In particular, our sorting protocol achieves up to a $58\times$ speedup in the online phase when sorting $2^{16}$ elements among $5$ parties.
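As a cleartext aid to intuition, the following plain-Python sketch (our own toy illustration, not the UFO$_\mathrm{s}$ protocol) shows why per-digit bucketing of small digits suffices to sort large elements: a stable bucket pass per digit yields a radix sort.
    def bucket_pass(xs, d, B=16):
        # stable counting-sort pass on digit d in base B; selecting a bucket
        # per element is where a one-hot encoding of the digit comes in
        buckets = [[] for _ in range(B)]
        for x in xs:
            buckets[(x // B ** d) % B].append(x)
        return [x for bucket in buckets for x in bucket]

    def radix_sort(xs, digits=4, B=16):
        for d in range(digits):                 # least-significant digit first
            xs = bucket_pass(xs, d, B)
        return xs

    assert radix_sort([7, 65535, 42, 300]) == [7, 42, 300, 65535]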
Return of the Kummer: a Toolbox for Genus-2 Cryptography
This work expands the machinery we have for isogeny-based cryptography in genus 2 by developing a toolbox of several essential algorithms for Kummer surfaces, the dimension-2 analogue of $x$-only arithmetic on elliptic curves. Kummer surfaces have been suggested for hyperelliptic curve cryptography since at least the 1980s, and recently these surfaces have reappeared as a tool to efficiently compute $(2,2)$-isogenies. We construct several essential analogues of techniques used in one-dimensional isogeny-based cryptography, such as pairings, deterministic point sampling, and point compression, and we give an overview of $(2,2)$-isogenies on Kummer surfaces. We furthermore show how Scholten's construction can be used to transform isogeny-based cryptography over elliptic curves over $\mathbb{F}_{p^2}$ into protocols over Kummer surfaces over $\mathbb{F}_{p}$.
As an example of this approach, we demonstrate that SQIsign verification can be performed completely on Kummer surfaces, and, therefore, that one-dimensional SQIsign verification can be viewed as a two-dimensional isogeny between products of elliptic curves, now defined over $\mathbb{F}_p$ rather than $\mathbb{F}_{p^2}$.
Dimension-Reducing Algorithms for Quaternion Ideal-SVP
We study the approximate Hermite Shortest Vector Problem (HSVP) in ideal lattices in orders of quaternion algebras. For one- and two-sided ideals respectively, we show that for almost all ideals we may solve HSVP in a sublattice of dimension at most one half (respectively, one quarter) of the original lattice dimension, with only small losses in the approximation factor. For two-sided ideals in a cryptographically-relevant family of maximal orders, we obtain approximation factors independent of the algebraic norm of the ideal. For one-sided ideals, we obtain a similar result for a large and natural family of ideal lattices. Finally, we turn our mathematical results into algorithms, and give an unconditional quantum polynomial time algorithm to solve HSVP in ideals of maximal orders of quaternion algebras, given an oracle for HSVP in ideals of maximal orders of number fields, in lower dimension.
SLIDE: Shuffle Shamir Secret Shares Uniformly with Linear Online Communication and Guaranteed Output Delivery
We revisit shuffle protocols for Shamir secret sharing. Existing constructions either produce a non-uniform shuffle or incur high communication and round complexity, in some cases exponential in the number of parties. We propose two new shuffle protocols that achieve uniform shuffle with communication complexity $O\!\left(\frac{k + l}{\log k} \, n^2 m \log m\right)$ for shuffling rows of an $m\times l$ matrix shared among $n$ parties, where $k\leq m$ is a tunable parameter that balances communication and computation. The first protocol is concretely more efficient, while the second achieves the best-known $O(nml)$ online communication and $O(n)$ rounds. Both constructions enjoy an ideal property of guaranteed output delivery (GOD). In terms of both overall and online communication, our protocols improve upon the state of the art for shuffle protocols based on Shamir secret sharing.
Our key technical ingredient is a novel permutation sharing technique that represents a permutation by smaller permutation matrices. Once permutations are shared, applying them becomes significantly cheaper, enabling a highly efficient online phase. The first protocol applies independent secret permutations from all parties in sequence, while the second protocol builds on the shuffle correlation technique of Gao et al., achieving $O(nml)$ online complexity. We further extend shuffle correlation to support GOD with linear online communication, an extension that has not been previously explored. As a result, we obtain SLIDE, the first shuffle protocol achieving both $O(nml)$ online communication and guaranteed output delivery.
Our constructions rely only on the most basic primitives of Shamir secret sharing over any field $\mathbb{F}$ with $\lvert\mathbb{F}\rvert > n$. Since shuffle is a fundamental building block for higher-level MPC primitives such as sorting and oblivious data structures, our results are broadly applicable and can accelerate many real-world applications of secure multiparty computation.
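Because a permutation matrix acts linearly, applying a shared permutation to shared data reduces to a matrix product, which is cheap on secret shares; the numpy sketch below (cleartext, our own illustration of the mechanics only) shows this in the clear.
    import numpy as np

    def perm_matrix(pi):
        # P @ v moves entry i of v to position pi[i]
        n = len(pi)
        P = np.zeros((n, n), dtype=int)
        for i, j in enumerate(pi):
            P[j, i] = 1
        return P

    rows = np.arange(12).reshape(4, 3)   # a 4x3 matrix standing in for shares
    P = perm_matrix([2, 0, 3, 1])
    shuffled = P @ rows                  # row i ends up at position pi[i]
    assert (shuffled[2] == rows[0]).all()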
Security Models and Cryptographic Protocols in a Quantum World
The emergence of quantum computing has provided new paradigms for cryptography. On the one hand, it poses significant new threats to existing classical cryptographic systems, requiring the community to define new security models that capture what a quantum adversary can do. On the other hand, it gives us new tools to design cryptographic protocols, with weaker assumptions than in the classical world, or even protocols that are impossible classically.
In this survey, we first give an overview of new security definitions for classical cryptography, considering quantum adversaries who can either only use local quantum computation (post-quantum security), or even send quantum messages and, in particular, have access to oracles in superposition (quantum security). We explore these new notions through the examples of commitments, zero-knowledge proofs, encryption, and signatures. Then, we present what is arguably the most famous application of quantum cryptography: quantum key distribution (QKD) protocols that take advantage of unique properties of quantum mechanics to provide secure communication unconditionally. We also explore cryptography beyond QKD, focusing on unclonable cryptography: a family of cryptographic functionalities, built with quantum states, and designed to be resistant to counterfeiting by leveraging the “no-cloning” theorem. We examine in particular quantum money, but also the recent notions of unclonable encryption and copy-protection, including related variants.
By presenting a comprehensive survey of these topics, this paper aims to provide a thorough understanding of the current landscape and future potential of quantum cryptography.
BOIL: Proof-Carrying Data from Accumulation of Correlated Holographic IOPs
In this paper, we present a batching technique for oracles corresponding to codewords of a Reed–Solomon code. This protocol is inspired by the round function of the STIR protocol (CRYPTO 2024). Using this oracle batching protocol, we propose a construction of a practically efficient accumulation scheme, which we call BOIL. Our accumulation scheme can be initiated with an arbitrary correlated holographic IOP, leading to a new class of PCD constructions. The results of this paper were originally given as a presentation at zkSummit12.
Secure Distributed State Management for Stateful Signatures with a Practical and Universally Composable Protocol
Stateful signatures like the NIST-standardized signature schemes LMS and XMSS provide an efficient and mature realization of post-quantum secure signature schemes. They are recommended for long-term use cases such as firmware signing. However, stateful signature schemes require proper management of a so-called state. In stateful signature schemes like LMS and XMSS, signing keys consist of a set of keys of a one-time signature scheme, and it has to be guaranteed that each one-time key is used only once. This is done by updating a state in each signature computation, basically recording which one-time keys have already been used. While this is straightforward in centralized systems, in distributed systems like secure enclaves consisting of, e.g., multiple hardware security modules (HSMs) with limited communication, keeping a distributed state that is consistent among all parties at any point in time presents a challenge. This challenge is not addressed by the current standardization processes.
In this paper we present a security model for the distributed key management of post-quantum secure stateful signatures like XMSS and LMS. We also present a simple, efficient, and easy-to-implement protocol proven secure in this security model, i.e., the protocol guarantees at any point in time a consistent state among the parties in a distributed system, like a distributed security enclave. The security model is defined in the universal composability (UC) framework of Ran Canetti by providing an ideal functionality for the distributed key management of stateful signatures. Hence our protocol remains secure even if arbitrarily composed with other instances of the same or other protocols, a necessity for the security of distributed key management protocols. Our main application is security enclaves consisting of HSMs, but the model and the protocol can easily be adapted to other scenarios of distributed key management for stateful signature schemes.
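One common engineering approach to this consistency problem is to hand each HSM a disjoint range of one-time-key indices and persist the counter before releasing a range; the toy sketch below (names and structure ours, not the paper's protocol) demonstrates only the invariant that any such scheme must maintain.
    class OtsIndexReservations:
        """Hand out disjoint one-time-signature index ranges to HSMs."""
        def __init__(self, total_keys):
            self.next_free = 0
            self.total = total_keys

        def reserve(self, count):
            if self.next_free + count > self.total:
                raise RuntimeError("one-time keys exhausted")
            lo = self.next_free
            # in a real system this counter must be durably persisted
            # BEFORE the range is released, or a crash could reuse keys
            self.next_free += count
            return range(lo, lo + count)

    pool = OtsIndexReservations(2 ** 20)
    hsm_a, hsm_b = pool.reserve(1024), pool.reserve(1024)
    assert set(hsm_a).isdisjoint(hsm_b)  # no one-time key is handed out twice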
Anamorphic Monero Transactions: the Threat of Bypassing Anti-Money Laundering Laws
In this paper, we analyze the clash between privacy-oriented cryptocurrencies and emerging legal frameworks for combating financial crime, focusing in particular on the recent European Union regulations. We analyze Monero, a leading "privacy coin" and a major point of concern for law enforcement, and study the scope of due diligence that must be exercised under the new law with regard to Monero trading platforms and how it translates to the technical capabilities of the Monero protocol. We both recognize flaws in the legislation and identify technical pitfalls threatening either the effective compliance of, say, Monero exchanges or the anonymity endeavour of Monero itself. Of independent interest is that we turn to anamorphic cryptography (marking one of the first practical applications of the concept) and leverage it to build a hidden transaction layer embedded in the Monero blockchain that obfuscates illegal money flow and circumvents transaction-level attempts at enforcing the EU law.
Quantum Resource Analysis of Low-Round Keccak/SHA-3 Preimage Attack: From Classical $2^{57.8}$ to Quantum $2^{28.9}$ using Qiskit Modeling
This paper presents a hardware-conscious analysis of the quantum acceleration of the classical 3-round Keccak-256 preimage attack using Grover's Algorithm. While the theoretical quantum speed-up from $T_{cl} \approx 2^{57.8}$ (classical) to $T_{qu} \approx 2^{28.9}$ (quantum) is mathematically sound, the practical implementation overhead is so extreme that attacks remain wholly infeasible in both resource and runtime dimensions. Using Qiskit-based circuit synthesis, we derive that a 3-round Keccak quantum oracle requires:
- 9,600 Toffoli gates (with uncomputation for reversibility)
- 3,200 logical qubits (1,600 state + 1,600 auxiliary)
- $7.47 \times 10^{13}$ total 2-qubit gates (full Grover search)
- 3.2 million physical qubits (with quantum error correction), a prohibitive requirement
- 0.12 years (43 days) to 2,365+ years of execution time, depending on machine assumptions
These barriers, particularly the physical qubit requirements, circuit depth, and error accumulation, render the quantum attack infeasible for any foreseeable quantum computer. Consequently, SHA-3 security is not threatened by quantum computers for preimage attacks. We emphasize the critical importance of hardware-aware complexity analysis in quantum cryptanalysis: the elegant asymptotic theory of Grover's Algorithm hides an engineering overhead so prohibitive that the quantum approach becomes infeasible from both resource and implementation perspectives.
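The headline figures can be sanity-checked with a back-of-envelope computation; the cost of roughly 15 elementary gates per Toffoli assumed below is our own figure, chosen to match standard decompositions, not a number taken from the paper.
    from math import sqrt

    iters = sqrt(2 ** 57.8)         # Grover iterations, about 2^28.9 ~ 5.0e8
    gates_per_oracle = 9_600 * 15   # 9,600 Toffolis at ~15 elementary gates each
    total_gates = iters * gates_per_oracle
    print(f"{total_gates:.2e}")     # ~7.2e13, the same order as the 7.47e13 above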
E2E-AKMA: An End-to-End Secure and Privacy-Enhancing AKMA Protocol Against the Anchor Function Compromise
The Authentication and Key Management for Applications (AKMA) system represents a recently developed protocol established by 3GPP, which is anticipated to become a pivotal component of the 5G standards. AKMA enables application service providers to delegate user authentication processes to mobile network operators, thereby eliminating the need for these providers to store and manage authentication-related data themselves. This delegation enhances the efficiency of authentication procedures but simultaneously introduces certain security and privacy challenges that warrant thorough analysis and mitigation.
The 5G AKMA service is facilitated by the AKMA Anchor Function (AAnF), which may operate outside the boundaries of the 5G core network. A compromise of the AAnF could potentially allow malicious actors to exploit vulnerabilities, enabling them to monitor user login activities or gain unauthorized access to sensitive communication content. Furthermore, the exposure of the Subscription Permanent Identifier (SUPI) to external Application Functions poses substantial privacy risks, as the SUPI could be utilized to correlate a user's real-world identity with their online activities, thereby undermining user privacy.
To mitigate these vulnerabilities, we propose a novel protocol named E2E-AKMA, which facilitates the establishment of a session key between the User Equipment (UE) and the Application Function (AF) with end-to-end security, even in scenarios where the AAnF has been compromised. Furthermore, the protocol ensures that no entity, aside from the 5G core network, can link account activities to the user's actual identity. This architecture preserves the advantages of the existing AKMA scheme, such as eliminating the need for complex dynamic secret data management and avoiding reliance on specialized hardware (apart from standard SIM cards). Experimental evaluations reveal that the E2E-AKMA framework incurs an overhead of approximately 9.4\% in comparison to the original 5G AKMA scheme, which indicates its potential efficiency and practicality for deployment.
Random-Access AEAD for Fast Lightweight Online Encryption
We study the problem of random-access authenticated encryption. In this setting, one wishes to encrypt (resp., decrypt) a large payload in an online manner, i.e., using a limited amount of memory, while allowing plaintext (resp., ciphertext) segments to be processed in a random order. Prior work has studied online AE for in-order (streaming) encryption and decryption, and later work added additional constraints to support random-access decryption. The result is complicated notions that are not built from the start to account for random access.
We thus provide a new, clean-slate treatment of the random-access setting. We introduce random-access authenticated encryption (raAE) schemes, which capture AEAD that provides random-access encryption and decryption. We introduce formal security definitions for raAE schemes that cover confidentiality, integrity, and commitment. We prove relationships with existing notions, showing that our simpler treatment does not sacrifice achievable security. Our implications also yield the first treatment of commitment security for online AEAD, an increasingly important security goal for AEAD.
We then exercise our formalization with a practice-motivated case study: FIPS-compliant raAE. We introduce an raAE scheme called FLOE (Fast Lightweight Online Encryption) that is FIPS compliant, compatible with existing AES-GCM APIs that mandate random nonces, and yet can provide secure, random-access, committing encryption of orders of magnitude more data than naive approaches that utilize AES-GCM. FLOE was designed in close collaboration with leading cloud data platform Snowflake, where it will soon be used in production to protect sensitive data.
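For orientation, the naive baseline that FLOE improves upon looks roughly like the sketch below (our own illustration using the pyca/cryptography package, not FLOE's construction): each segment gets its own AES-GCM nonce derived from a random prefix and the segment index, so any segment can be decrypted independently, but the random-nonce budget of AES-GCM then sharply limits how much data a single key may protect.
    import os, struct
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aead = AESGCM(key)
    prefix = os.urandom(8)                     # per-object random nonce prefix

    def seal_segment(i, data):
        nonce = prefix + struct.pack(">I", i)  # 12-byte nonce: prefix || index
        return aead.encrypt(nonce, data, struct.pack(">I", i))  # index bound as AD

    def open_segment(i, ct):
        nonce = prefix + struct.pack(">I", i)
        return aead.decrypt(nonce, ct, struct.pack(">I", i))

    ct = seal_segment(7, b"segment seven")          # random access: no other
    assert open_segment(7, ct) == b"segment seven"  # segment is ever touched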
Post-Quantum Security of the Sum of Even-Mansour
The Sum of Even-Mansour (SoEM) construction was proposed by Chen et al. at Crypto 2019. This construction implements a pseudorandom function via the modular addition of two independent Even-Mansour structures and can spawn multiple variants by altering the number of permutations or keys. It has become the design basis for some symmetric schemes, such as the nonce-based encryption scheme CENCPP* and the nonce-based message authentication code scheme nEHTm. This paper provides a proof of the quantum security of the SoEM21 construction under the Q1 model: when an attacker has quantum access to the random permutations but only classical access to the keyed construction, the SoEM21 construction ensures security of up to $n/3$ bits. This exactly matches the complexity $O(2^{n/3})$ of the quantum key recovery attack in the Q1 model recently proposed by Li et al., thus establishing a tight bound.
Benchmarking SLH-DSA: A Comparative Hardware Analysis Against Classical Digital Signatures for Post-Quantum Security
The advent of large-scale quantum computers poses a fundamental threat to widely deployed public-key cryptographic schemes such as RSA and elliptic curve digital signatures. In response, the National Institute of Standards and Technology has standardized several post-quantum cryptographic algorithms, including the Stateless Hash-Based Digital Signature Algorithm (SLH-DSA) specified in FIPS 205. While SLH-DSA offers strong, conservative security guarantees based solely on cryptographic hash functions, its practical adoption depends on a clear understanding of its hardware cost and performance characteristics relative to classical standards.
This paper presents a unified hardware benchmarking study of SLH-DSA against RSA, DSA, ECDSA, and EdDSA. All algorithms are implemented at the register-transfer level in Verilog HDL and synthesized on the same Xilinx Artix-7 FPGA platform to ensure a fair comparison. The evaluation focuses on key hardware metrics, including logic utilization, memory usage, DSP consumption, operational latency, maximum clock frequency, and throughput for key generation, signing, and verification.
The results demonstrate that SLH-DSA is logic- and memory-intensive, with significantly higher signing latency and larger signature sizes compared to classical schemes. However, its verification performance is highly competitive, and its public key size remains extremely small. In contrast, classical schemes are primarily arithmetic-bound and rely heavily on DSP resources. The findings highlight that SLH-DSA represents a viable post-quantum solution for applications prioritizing long-term security assurance and efficient verification, such as firmware authentication and digital archiving, despite its higher signing cost.
High Exponents May Not Suffice to Patch AIM (On Attacks, Weak Parameters, and Patches for AIM2)
The growth of advanced cryptographic applications has driven the development of arithmetization-oriented (AO) ciphers over large finite fields, which are designed to minimize multiplicative complexity. However, this design advantage of AO ciphers could also serve as an attack vector. For instance, the \textsf{AIM} one-way function in the post-quantum signature \AIMer proposed at CCS 2023 was broken by several works soon after its publication. The designers then promptly developed secure patches and proposed an enhanced version, \textsf{AIM2}, which was incorporated into the latest version of \AIMer, selected as one of the winners of the Korean PQC Competition in early 2025.
In this paper, we focus on the algebraic security of AIM2 over $\mathbb{F}_{2^n}$. First, we introduce a resultant-minimized model that reduces eliminations by using a non-$k$-based substitution strategy and linearized-polynomial decomposition, achieving an attack time complexity of $2^{188.76}$ ($2^{195.05}$) primitive calls of \textsf{AIM2-III} when $\omega=2$ ($\omega=2.373$), indicating that the designers have been over-optimistic in the evaluation of their security margin. Second, we propose a subfield reduction technique for the case in which exponents approach subfield extension sizes and equation degrees collapse sharply, \textit{e.g.,} the exponent $e_2=141\mapsto 13$ in \textsf{AIM2-V} when considering the subfield $\mathbb{F}_{2^{128}}$. This can lower the algebraic attack complexity to $2^{295.97}$ primitive calls at $\omega=2$, which improves upon the designers' estimated bound for the Gr\"{o}bner basis attack by about $2^{100}$. Besides, based on our attack methods, we have identified some weak parameter choices, which could provide concrete design guidance for the \textsf{AIM2} construction, especially for the exponent of its Mersenne S-box. Finally, to address the potential vulnerabilities, we further propose \textsf{AIM2-patch}, a simple secure patch on \textsf{AIM2}, which can prevent key elimination, neutralize linearized-polynomial decomposition, and raise algebraic attack complexity, while incurring negligible overheads in the \AIMer scheme.
An Efficient Private GPT Never Autoregressively Decodes
The wide deployment of the generative pre-trained transformer (GPT) has raised privacy concerns for both clients and servers. While cryptographic primitives can be employed for secure GPT inference to protect the privacy of both parties, they introduce considerable performance overhead. To accelerate secure inference, this study proposes a public decoding and secure verification approach that utilizes public GPT models, motivated by the observation that securely decoding one token and securely decoding multiple tokens incur similar latency. The client uses the public model to generate a set of tokens, which are then securely verified by the private model for acceptance. The efficiency of our approach depends on the acceptance ratio of tokens proposed by the public model, which we improve from two aspects: (1) a private sampling protocol optimized for cryptographic primitives and (2) model alignment using knowledge distillation. Our approach improves the efficiency of secure decoding while maintaining the same level of privacy and generation quality as standard secure decoding. Experiments demonstrate a $2.1\times \sim 6.0\times$ speedup compared to standard decoding across three pairs of public-private models and different network conditions.
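The resulting decode-then-verify loop can be summarized by the control flow below; public_propose and secure_verify are hypothetical stand-ins for the client's public model and the protocol's secure verification pass, and the sketch deliberately ignores sampling details.
    def speculative_decode(public_propose, secure_verify, prompt, max_len, k=4):
        out = list(prompt)
        while len(out) < max_len:
            draft = public_propose(out, k)         # k tokens, computed in the clear
            n_ok, fix = secure_verify(out, draft)  # one secure pass checks all k
            out += draft[:n_ok] + [fix]            # keep agreed prefix + correction
        return out[:max_len]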
Nimbus: Secure and Efficient Two-Party Inference for Transformers
Transformer models have gained significant attention due to their power in machine learning tasks. Their extensive deployment has raised concerns about the potential leakage of sensitive information during inference. However, when applied to Transformers, existing approaches based on secure two-party computation (2PC) face efficiency limitations on two fronts: (1) resource-intensive matrix multiplications in linear layers, and (2) complex non-linear activation functions like $\mathsf{GELU}$ and $\mathsf{Softmax}$. This work presents a new two-party inference framework $\mathsf{Nimbus}$ for Transformer models. For the linear layer, we propose a new 2PC paradigm along with an encoding approach to securely compute matrix multiplications based on an outer-product insight, which achieves $2.9\times \sim 12.5\times$ performance improvements compared to the state-of-the-art (SOTA) protocol. For the non-linear layer, through a new observation of utilizing the input distribution, we propose an approach of low-degree polynomial approximation for $\mathsf{GELU}$ and $\mathsf{Softmax}$, which improves the performance of the SOTA polynomial approximation by $2.9\times \sim 4.0\times$, where the average accuracy loss of our approach is 0.08\% compared to the non-2PC inference without privacy. Compared with the SOTA two-party inference, $\mathsf{Nimbus}$ improves the end-to-end performance of BERT inference by $2.7\times \sim 4.7\times$ across different network settings.
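The outer-product identity behind the linear-layer protocol is easy to state in numpy (a cleartext identity, shown only to fix ideas): a matrix product is a sum of rank-1 outer products, one per inner dimension.
    import numpy as np

    A = np.random.randint(0, 7, (3, 4))
    B = np.random.randint(0, 7, (4, 5))
    # A @ B as a sum of rank-1 outer products, one per inner index k
    C = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))
    assert (C == A @ B).all()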
ARION: Attention-Optimized Transformer Inference on Encrypted Data
Privacy-preserving Transformer inference (PPTI) is essential for deploying large language models (LLMs) such as BERT and LLaMA in sensitive domains. In these models, the attention mechanism is both the main source of expressiveness and the dominant performance bottleneck under fully homomorphic encryption (FHE), due to large ciphertext matrix multiplications and the softmax nonlinearity. This paper presents Arion, a non-interactive FHE-based PPTI protocol that specifically optimizes the computation of encrypted attention. First, for the three consecutive ciphertext matrix multiplications in multi-head attention, we introduce the double Baby-Step Giant-Step algorithm, which significantly reduces the number of ciphertext rotations. On BERT-Base, Arion achieves an 82.5% reduction in rotations over the state-of-the-art PPTI protocol MOAI (2025), corresponding to a 5.7x reduction in rotation cost. Second, we propose a linear–nonlinear fusion technique tailored to the softmax evaluation in attention. By decomposing softmax into shift-by-maximum, exponentiation, and reciprocal sub-steps and fusing them with the surrounding encrypted matrix operations, Arion enables efficient attention evaluation while remaining compatible with diverse ciphertext packing formats. We implement Arion using Lattigo and first evaluate attention kernels on popular LLMs including BERT-Tiny, BERT-Base, and LLaMA, confirming the practicality and scalability of the proposed optimizations for encrypted attention computation. For end-to-end applications, on classification tasks for several benchmark datasets, Arion attains accuracy comparable to plaintext inference and yields up to 2.5x end-to-end speedups over MOAI for BERT-Base.
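For background, the single-matrix baby-step giant-step trick that the double BSGS algorithm builds on is the standard diagonal method (recalled here; notation ours): a plaintext matrix times an encrypted vector is $M\mathbf{v} = \sum_{k=0}^{n-1} \mathbf{d}_k \odot \rho^{k}(\mathbf{v})$, where the $\mathbf{d}_k$ are the generalized diagonals of $M$ and $\rho$ is a slot rotation; regrouping $k = gi + j$ with $g \approx \sqrt{n}$ gives $M\mathbf{v} = \sum_{i} \rho^{gi}\big(\sum_{j=0}^{g-1} \rho^{-gi}(\mathbf{d}_{gi+j}) \odot \rho^{j}(\mathbf{v})\big)$, so only about $g + n/g \approx 2\sqrt{n}$ ciphertext rotations are needed instead of $n$.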
HHGS: Forward-secure Dynamic Group Signatures from Symmetric Primitives
Group signatures allow a group member to sign messages on behalf of the group while preserving the signer's anonymity, making them invaluable for privacy-sensitive applications. As quantum computing advances, post-quantum security in group signatures becomes essential. Symmetric primitives (SP) offer a promising pathway due to their simplicity, efficiency, and well-understood security foundations. In this paper, we introduce the first \textit{forward-secure dynamic group signature} (FSDGS) framework relying solely on SP. We begin with \textit{hierarchical hypertree group signatures} (HHGS), a basic scheme that securely organizes keys of one-time signatures (OTS) in a hypertree, using puncturable pseudorandom functions to enable on-demand key generation, forward security, and dynamic enrollment, and that provides resilience against attacks exploiting registration patterns by obfuscating the assignment and usage of keys. We then extend this foundation to HHGS$^+$, which orchestrates multiple HHGS instances in a generic way, significantly extending the total signing capacity to $O(2^{60})$; this outperforms HHGS's closest competitors while keeping signatures below 8 kilobytes. We prove the security of both schemes in the standard model. Our results outline a practical SP-driven pathway toward post-quantum-secure group signatures suitable for resource-constrained client devices.
Accelerating FrodoKEM in Hardware
FrodoKEM, a conservative post-quantum key encapsulation mechanism based on the plain Learning with Errors (LWE) problem, has been recommended for use by several government cybersecurity agencies and is currently undergoing standardization by the International Organization for Standardization (ISO). Despite its robust security guarantees, FrodoKEM's performance remains one of the main challenges to its widespread adoption. This work addresses this concern by presenting a fully standard-compliant, high-performance hardware implementation of FrodoKEM targeting both FPGA and ASIC platforms. The design introduces a scalable parallelization architecture that supports run-time configurability across all twelve parameter sets, covering three security levels (L1, L3, L5), two PRNG variants (SHAKE-based and AES-based), and both standard and ephemeral modes, alongside synthesis-time tunability through a configurable performance parameter to balance throughput and resource utilization. For security level L1 on Xilinx Ultrascale+ FPGA, the implementation achieves 3,164, 2,846, and 2,614 operations per second for key generation, encapsulation, and decapsulation, respectively, representing the fastest standard-compliant performance reported to date while consuming only 27.8K LUTs, 64 DSPs, and 8.1K flip-flops. These results significantly outperform all prior specification-compliant implementations and even surpass non-compliant designs that sacrifice specification adherence for speed. Furthermore, we present the first ASIC evaluation of FrodoKEM using the NANGATE45 45 nm technology library, achieving 7,194, 6,471, and 5,943 operations per second for key generation, encapsulation, and decapsulation, respectively, with logic area of 0.235 mm$^2$. The ASIC implementation exhibits favorable sub-linear area scaling and competitive energy efficiency across different performance parameter configurations, establishing a baseline for future comparative studies. The results validate FrodoKEM's practical viability for deployment in high-throughput, resource-constrained, and power-sensitive cryptographic applications, demonstrating that conservative post-quantum security can be achieved without compromising performance.
On the Pitfalls of Modeling Individual Knowledge
The concept of knowledge has been central in cryptography, especially within cryptographic proof systems. Traditionally, research in this area considers an abstract \emph{prover} defending a claim that it knows a message $M$. Recently, a stronger concept, termed ``individual'' (Dziembowski et al., CRYPTO'23) or ``complete'' (Kelkar et al., CCS'24) knowledge, has emerged. This notion ensures the prover physically stores $M$ on a machine that it controls. As we argue in the paper, this concept also appears in earlier work on ``non-outsourceable puzzles'' (Miller et al., CCS'15), which implicitly assumes that quickly performing a complex computation on a string $M$ implies storing it on a single machine.
In this line of work, the authors typically rely on algorithms whose computation requires a massive number of queries to a hash function $H$. This paper highlights a subtle issue in the modeling used in some of these papers, more concretely, the assumption that $H$ can be modeled as an atomic random oracle on long messages. Unfortunately, this does not correspond well to how hash functions are constructed in practice. For example, real-world hash functions (e.g., Merkle-Damgård or sponge-based) allow partial evaluation on long inputs, violating this assumption. Another example is the hashing used in Bitcoin mining, which permits similar precomputation. This undermines some protocols relying on individual knowledge. We demonstrate practical attacks against Miller et al.'s and Kelkar et al.'s schemes based on this observation, and discuss secure alternatives. Our alternative constructions, which are modifications of the original ones, avoid reliance on the random-oracle behavior of hash functions on long messages. In the full version of this paper, we will provide their formal security analysis in the individual cryptography model of Dziembowski et al. (CRYPTO'23).
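The partial-evaluation property at the heart of these attacks is easy to demonstrate with a toy Merkle-Damgård chain; SHA-256 serves below as a stand-in compression function, and the construction is ours, for illustration only.
    import hashlib

    def compress(state, block):
        return hashlib.sha256(state + block).digest()

    def md_hash(blocks, iv=b"\x00" * 32):
        state = iv
        for block in blocks:
            state = compress(state, block)
        return state

    blocks = [bytes([i]) * 64 for i in range(8)]
    midstate = md_hash(blocks[:6])    # hash the long prefix once, keep 32 bytes...
    assert md_hash(blocks) == md_hash(blocks[6:], iv=midstate)  # ...resume later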
How to Compare Bandwidth Constrained Two-Party Secure Messaging Protocols: A Quest for A More Efficient and Secure Post-Quantum Protocol
Transitioning existing classical two-party secure messaging protocols to post-quantum protocols has been an active movement in practice in recent years: Apple’s PQ3 protocol and the recent Triple Ratchet protocol being investigated by the Signal team with academics (Dodis et al. Eurocrypt’25). However, due to the large communication overhead of post-quantum primitives, numerous design choices non-existent in the classical setting are being explored, rendering comparison of secure messaging protocols difficult, if not impossible.
In this work, we thus propose a new pragmatic metric to measure how secure a messaging protocol is given a particular communication pattern, enabling a concrete methodology to compare secure messaging protocols. We uncover that there can be no “optimal” protocol, as different protocols are often incomparable with respect to worst-case (adversarial) messaging behaviors, especially when faced with real-world bandwidth constraints. We develop a comprehensive framework to experimentally compare various messaging protocols under given bandwidth limits and messaging behaviors. Finally, we apply our framework to compare several new and old messaging protocols. Independently, we also uncover untapped optimizations, which we call opportunistic sending, leading to better post-quantum messaging protocols. To capture these optimizations, we further propose sparse continuous key agreement as a fundamental building block for secure messaging protocols, which could be of independent interest.
Breaking UOV Encryption: Key Recovery Attack on OliVier
The Oil and Vinegar (OV) trapdoor is widely used in signature schemes such as UOV and MAYO. Recently, Esposito et al. proposed OliVier, an encryption scheme based on this trapdoor. However, the OV trapdoor was originally designed for signatures, and adapting it to encryption introduces inherent challenges.
We identify two such challenges and analyze how OliVier addresses the first, while showing that the unresolved second challenge enables a practical key-recovery attack. We conclude that any scheme using the OV trapdoor for encryption must also solve this second problem, for which no efficient solution is currently known.
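For context, the asymmetry at play stems from the shape of the OV central map: each quadratic polynomial may mix vinegar variables with anything but never multiplies two oil variables, $f^{(k)}(\mathbf{v}, \mathbf{o}) = \sum_{i \le j} \alpha^{(k)}_{ij} v_i v_j + \sum_{i,j} \beta^{(k)}_{ij} v_i o_j + (\text{linear and constant terms})$ (standard background, recalled here for convenience). Fixing the vinegar variables $\mathbf{v}$ therefore leaves a linear system in the oil variables, which is ideal for signing but offers no obvious inversion mechanism for a ciphertext; this is the root of the challenges discussed above.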
Masked Circuit Compiler in the Cardinal Random Probing Composability Framework
Designing practically secure masked circuits remains a central problem in the field of cryptographic implementation. While most masking schemes have been proven secure in the classical probing model, this model fails to capture more advanced side-channel attacks such as horizontal attacks. In recent years, the community has shifted toward the more realistic random probing model, which offers stronger guarantees. Yet, balancing strong security with practical efficiency continues to be a significant challenge.
In this work, we introduce new tools and constructions that significantly improve the design and analysis of random probing secure circuits. First, we formalize new security notions that combine the benefits of cardinal and general Random Probing Composability (RPC), two recently introduced notions enabling more flexible and efficient composition of secure gadgets. We then show how uniformly random permutations can be applied to transform any cardinal or general RPC gadget into a so-called uniformly cardinal RPC gadget, thereby enhancing security at low cost. Using these techniques, we propose the first non-linear multiplication gadget, inspired by the recursive construction from CHES 2016, that achieves concrete cardinal RPC security. We provide a detailed comparison with state-of-the-art multiplication gadgets in terms of both random probing advantage and implementation complexity. Building upon this gadget, we design a tighter random probing compiler that strategically uses permutations to improve security bounds while preserving efficiency. Finally, we apply our compiler to the AES and demonstrate improved performance and security compared to existing methods.
PRGUE Schemes: Efficient Updatable Encryption With Robust Security From Symmetric Primitives
Securing sensitive data for long-term storage in the cloud is a challenging problem.
Updatable encryption (UE) enables changing the encryption key of encrypted data in the cloud while the plaintext and all versions of the key remain secret from the cloud storage provider, making it an efficient alternative for companies that seek to outsource their data storage.
The most secure UE schemes to date follow robust security models, such as the one by Boyd et al. from CRYPTO 2020, and rely exclusively on asymmetric cryptography, thus incurring a substantial performance cost.
In contrast, the Nested UE construction of Boneh et al. from ASIACRYPT 2020 achieves much better efficiency with symmetric cryptography, but it provides weaker security guarantees.
Boyd et al. further suggest that attaining robust UE security inherently requires the use of asymmetric cryptography.
In this work, we show for the first time that symmetric UE schemes are not inherently limited in their security and can achieve guarantees on par with, and even beyond, Boyd’s UE model.
To this end, we extend Boyd’s framework to encompass the class of ciphertext-dependent UE schemes and introduce indistinguishability-from-random (IND\$) as a stronger refinement of indistinguishability.
While our IND\$ notion primarily streamlines the proofs of advanced security properties within the model, it yields practical privacy advantages: ciphertexts do not exhibit a recognizable structure that could otherwise distinguish them from arbitrary data.
We then introduce two robustly secure symmetric UE constructions, tailored to different target security levels.
Our schemes are built on a novel design paradigm that combines symmetric authenticated encryption with ciphertext re-randomization, leveraging for the first time the use of pseudorandom number generators in a one-time-pad style. This approach enables both robust security and high efficiency, including in AES-based implementations.
Our first scheme, PUE-List, delivers encryption up to 600× faster than prior asymmetric schemes of similar robustness, while matching Boneh et al.’s efficiency and achieving the stronger security level of Boyd et al.
Our second scheme, PUE-One, further boosts performance with constant-time decryption 24× faster than all previously known UE schemes, overcoming the main bottleneck in Boneh’s design, while trading off some security, yet still significantly surpassing the guarantees of Boneh’s Nested scheme.
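A minimal sketch of the one-time-pad style re-randomization idea (our own simplification, not the PUE-List or PUE-One constructions): the update material is a fresh PRG seed, and XORing the PRG stream onto a ciphertext re-randomizes it without decryption, while the seed holder can always peel the pad back off.
    import os, hashlib

    def prg(seed, n):
        # simple counter-mode PRG from a hash, for illustration only
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return out[:n]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    ct = os.urandom(32)                   # stand-in for an AEAD ciphertext
    seed = os.urandom(16)                 # fresh randomness for one update epoch
    ct_new = xor(ct, prg(seed, len(ct)))  # re-randomized without decrypting
    assert xor(ct_new, prg(seed, len(ct_new))) == ct  # seed holder can undo it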
Language-Agnostic Detection of Computation-Constraint Inconsistencies in ZKP Programs via Value Inference
Zero-knowledge proofs (ZKPs) allow a prover to convince a verifier of a statement's truth without revealing any other information. In recent years, ZKPs have matured into a practical technology underpinning major applications. However, implementing ZKP programs remains challenging, as they operate over arithmetic circuits that encode the logic of both the prover and the verifier. Therefore, developers must not only express the computations for generating proofs, but also explicitly specify the constraints for verification. As recent studies have shown, this decoupling may lead to critical ZKP-specific vulnerabilities.
Unfortunately, existing tools for detecting them are limited, as they:
(1) are tightly coupled to specific ZKP languages,
(2) are confined to the constraint level, preventing reasoning about the underlying computations,
(3) target only a narrow class of bugs,
and (4) suffer from scalability bottlenecks due to reliance on SMT solvers.
To address these limitations, we propose a language-agnostic formal model, called the Domain Consistency Model (DCM), which captures the relationship between computations and constraints. Using this model, we provide a taxonomy of vulnerabilities based on computation-constraint mismatches, including novel subclasses overlooked by existing models. Next, we implement a lightweight automated bug detection tool, called CCC-Check, which is based on abstract interpretation. We evaluate CCC-Check on a dataset of 20 benchmark programs. Compared to the SoTA verification tool CIVER, our tool achieves a 100-1000$\times$ speedup, while maintaining a low false positive rate. Finally, using the DCM, we examine six widely adopted ZKP projects and uncover 15 previously unknown vulnerabilities. We reported these bugs to the projects' maintainers, 13 of which have since been patched. Of these 15 vulnerabilities, 12 could not be captured by existing models.
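A canonical instance of such a computation-constraint mismatch, written here in plain Python rather than any particular ZKP language, is the well-known under-constrained is-zero gadget: the witness generator computes correct values, but one missing constraint lets a cheating prover pick the output freely.
    F = 101                                    # toy prime field

    def witness(a):                            # computation side (prover hints)
        inv = 0 if a == 0 else pow(a, F - 2, F)
        out = (1 - a * inv) % F                # out == 1 iff a == 0
        return a, inv, out

    def constraints(a, inv, out):              # constraint side (verifier checks)
        # MISSING: out == 1 - a*inv. Without that check, the bogus witness
        # (a=0, inv=0, out=0) also passes, although out should then be 1.
        return (out * a) % F == 0

    assert constraints(*witness(5))            # honest witness passes
    assert constraints(0, 0, 0)                # ...but so does a cheating one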
Leakage-Resilient Multi-Party Computation: Protecting the Evaluator in Circuits Garbling
Garbling schemes allow two parties to compute a joint function on private inputs without revealing them. Yet, a semi-honest garbler might exploit hardware/software side-channel leakages from the evaluator, an alarming threat with no concrete solution yet. Using the homomorphic properties of ElGamal encryption, we can prevent such leakage-based attacks.
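The homomorphic property being invoked is the standard multiplicative one; the toy sketch below (insecure parameters, purely illustrative) shows how ciphertexts combine without any intermediate decryption.
    p, g = 467, 2                      # toy group parameters, NOT secure
    x = 153                            # secret key
    h = pow(g, x, p)                   # public key

    def enc(m, r):
        return pow(g, r, p), m * pow(h, r, p) % p

    def dec(c1, c2):
        return c2 * pow(c1, p - 1 - x, p) % p      # c2 * c1^(-x) mod p

    a, b = enc(5, 31), enc(7, 45)
    prod = (a[0] * b[0] % p, a[1] * b[1] % p)      # componentwise product
    assert dec(*prod) == 35                        # Enc(5)*Enc(7) opens to 5*7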
Completing Policy-based Anonymous Tokens: Private Bits, Public Metadata and more...
Anonymous tokens are cryptographic protocols for restricting access to online resources to eligible users. After proving eligibility to the token issuer, the client receives a set of tokens. Later, it can prove eligibility to a resource provider by sending one of the tokens received from the issuer. The anonymous token protocol ensures that the resource provider cannot link received tokens to their issuance, even if it colludes with the token issuer. Recently, Faut et al. (EuroS\&P’25) introduced the concept of policy-based anonymous tokens, in which an issuer provides a single pre-token to a client, who can locally derive multiple tokens according to a publicly announced policy. The major advantage of policy-based tokens is that the communication complexity of the issuance phase is constant. While the work of Faut et al. constitutes a promising step in a new direction, their protocol still lacks several desirable properties known from standard anonymous tokens -- most notably, the ability to bind a pre-token and all tokens derived from it to a private metadata bit or a publicly known metadata string.
In this work, we present a new framework for policy-based anonymous token schemes in the random oracle model. Our framework includes two concretely practical constructions -- one based on equivalence class signatures and one on algebraic MACs -- as well as a communication-optimized, though less practical, construction based on zkSNARKs. All three constructions can be configured to support private metadata bits, public metadata, or both. We formalize the notion of policy-based anonymous tokens with a private metadata bit and public metadata, and we prove security of the two primary constructions: the equivalence-class-signature-based scheme and the algebraic-MAC-based scheme. Finally, we provide an experimental evaluation and comparison of all our constructions alongside the most relevant related work. Our results demonstrate that our two primary constructions achieve significant efficiency improvements over the scheme of Faut et al., both in terms of computation and communication.
More NTRU+Sign Signatures from Cyclotomic Trinomials
Recently, $\mathsf{NTRU}$+$\mathsf{Sign}$ was proposed as a new compact signature scheme following the `Fiat-Shamir with Aborts' (FSwA) framework. Its compactness is mainly based on a novel NTRU-based key structure that fits well with bimodal distributions in the FSwA framework. However, despite its compactness, $\mathsf{NTRU}$+$\mathsf{Sign}$ fails to provide a diverse set of parameters that can meet desired security levels. This limitation stems from its reliance on the ring $\mathbb{Z}_q[x]/\langle x^n+1 \rangle$, where $n$ is restricted to powers of two, limiting the flexibility in selecting appropriate security levels. To overcome this limitation, we propose a revised version of $\mathsf{NTRU}$+$\mathsf{Sign}$ by adopting the ring $\mathbb{Z}_q[x]/\langle x^n-x^{n/2}+1\rangle$ arising from cyclotomic trinomials, where $n=2^{i}3^{j}$ for positive integers $i$ and $j$. Our parameterization offers three distinct security levels, approximately $120$, $190$, and $260$ bits, while preserving the compactness attained over $\mathbb{Z}_q[x]/\langle x^n+1 \rangle$. We implement these re-parameterized $\mathsf{NTRU}$+$\mathsf{Sign}$ schemes, showing that the performance of $\mathsf{NTRU}$+$\mathsf{Sign}$ over cyclotomic trinomials remains comparable to previous lattice-based signature schemes such as $\mathsf{Dilithium}$ and $\mathsf{HAETAE}$.
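Arithmetic in the trinomial ring needs nothing beyond the rewrite rule $x^n \equiv x^{n/2} - 1$; the sketch below (toy parameters of our choosing, schoolbook multiplication rather than the transforms used in practice) reduces a product of two degree-$<n$ polynomials accordingly.
    q, n = 3329, 12                  # toy parameters; the scheme uses n = 2^i * 3^j

    def mul_mod_trinomial(f, g):
        # schoolbook product, then fold high degrees with x^n = x^(n/2) - 1
        c = [0] * (2 * n - 1)
        for i, fi in enumerate(f):
            for j, gj in enumerate(g):
                c[i + j] = (c[i + j] + fi * gj) % q
        for d in range(2 * n - 2, n - 1, -1):
            t = c[d]
            c[d] = 0
            c[d - n // 2] = (c[d - n // 2] + t) % q  # adds the x^(d - n/2) term
            c[d - n] = (c[d - n] - t) % q            # subtracts the x^(d - n) term
        return c[:n]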
On the Provable Dual Attack for LWE by Modulus Switching
As a theoretical cornerstone of post-quantum cryptography, the Learning With Errors (LWE) problem serves as the security foundation for standardized algorithms such as Kyber and Dilithium. Recently, a framework for provable dual attacks on LWE was proposed by Pouly et al. at Eurocrypt 2024, addressing the limitations in effectiveness caused by existing methods' reliance on heuristic assumptions in LWE dual attacks. Their paper also poses an open problem: how to formally integrate modulus switching into this framework to reduce attack costs. The main purpose of this paper is to give a solution to this open problem by presenting an improved provable dual attack method that incorporates modulus switching and Chinese Remainder Theorem (CRT) techniques. First, we design a modulus switching mechanism that eliminates practical errors via the Poisson summation formula. By embedding the inherent noise from modulus switching into a rational lattice framework, our approach effectively prevents the risk of attack failure caused by the merging of such errors with LWE noise. Theoretical guarantees (Theorems \ref{main_theorem} and \ref{main_theorem_entire}) rigorously quantify the parameter ranges for successful attacks. Second, we introduce a CRT-based secret recovery method that aggregates partial secrets from independent sub-attacks. By leveraging the Chinese Remainder Theorem to reconstruct full secrets from congruence relations, our method adapts to arbitrary secret distributions. Furthermore, by using a tighter variant of Banaszczyk's measure inequality, we obtain a precise parameter range for the dual attack's efficacy through rigorous mathematical proof, and achieve the same complementary gap with the contradictory regime (proposed by Ducas et al.) as in Pouly et al.'s work. Experiments show a $4$--$5$ bit improvement in attack estimates compared to the original framework.
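For readers unfamiliar with the technique, modulus switching maps an LWE sample $(\mathbf{a}, b) \in \mathbb{Z}_q^n \times \mathbb{Z}_q$ to a smaller modulus $p < q$ by coefficient-wise rounding, $(\mathbf{a}, b) \mapsto (\lceil (p/q)\,\mathbf{a} \rfloor, \lceil (p/q)\, b \rfloor) \in \mathbb{Z}_p^n \times \mathbb{Z}_p$ (standard definition, recalled here for context), which lowers the cost of the attack at the price of an extra rounding error term of size roughly $\|\mathbf{s}\|/\sqrt{12}$ per sample; controlling how this term merges with the LWE noise is exactly what the Poisson-summation analysis above makes rigorous.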
Certified-Everlasting Quantum NIZK Proofs
We study non-interactive zero-knowledge proofs (NIZKs) for NP satisfying: 1) statistical soundness, 2) computational zero-knowledge and 3) certified-everlasting zero-knowledge (CE-ZK). The CE-ZK property allows a verifier of a quantum proof to revoke the proof in a way that can be checked (certified) by the prover. Conditioned on successful certification, the verifier's state can be efficiently simulated with only the statement, in a statistically indistinguishable way. Our contributions regarding these certified-everlasting NIZKs (CE-NIZKs) are as follows:
- We identify a barrier to obtaining CE-NIZKs in the CRS model via generalizations of known interactive proofs that satisfy CE-ZK.
- We circumvent this by constructing CE-NIZK from black-box use of NIZK for NP satisfying certain properties, along with OWFs. As a result, we obtain CE-NIZKs for NP in the CRS model, based on polynomial hardness of the learning with errors (LWE) assumption.
- In addition, we observe that the aforementioned barrier does not apply to the shared EPR model. Consequently, we present a CE-NIZK for NP in this model based on any statistical binding hidden-bits generator, which can be based on LWE. The only quantum computation in this protocol involves single-qubit measurements of the shared EPR pairs.
Towards Lightweight CKKS: On Client Cost Efficiency
Fully homomorphic encryption (FHE) enables clients with small devices to securely delegate their computations to powerful servers. However, to delegate these computations, a client should generate and transmit several gigabytes of FHE keys to the server. Reducing the size of FHE keys without compromising efficiency is therefore highly desirable, particularly for applications involving mobile and IoT devices.
In this work, we propose two new key management systems, $\textsf{KG}^{+}$ and $\textsf{BTS}^{+}$, which reduce the client-side cost of FHE. The $\textsf{KG}^{+}$ system significantly reduces the key size without any compromise in the efficiency of homomorphic computation compared to the state-of-the-art key management system by Lee-Lee-Kim-No [Asiacrypt 2023]. The $\textsf{BTS}^{+}$ system further reduces the key size at the expense of granularity in homomorphic computation.
Our systems rely on a new ring switching technique for keys that bridges FHE keys with different parameters. This technique decouples the rings of FHE keys during transmission and computation. In our approach, the client generates and sends ``transmission keys'' with size-optimal parameters, and the server generates ``evaluation keys'' with computation-optimal parameters.
We evaluate the communication costs of $\textsf{KG}^{+}$ and $\textsf{BTS}^{+}$ on the tasks studied in Lee-Lee-Kim-No. The transmission key sizes for CKKS bootstrapping with ring dimension $2^{16}$ are $325$--$609$ MB for $\textsf{KG}^{+}$ and $285$ MB for $\textsf{BTS}^{+}$. These are $3.09$--$4.37\times$ and $3.51$--$9.30\times$ smaller than Lee-Lee-Kim-No, respectively. For the secure ResNet-20 inference, the key sizes for $\textsf{KG}^{+}$ and $\textsf{BTS}^{+}$ are $325$--$609$ MB and $285$ MB, respectively. These are $3.95$--$5.73\times$ and $4.53$--$12.25\times$ smaller than Lee-Lee-Kim-No.
TSS-PV: Traceable Secret Sharing with Public Verifiability
High-value custodial systems require both Public Verifiability (PVSS) to audit key distribution and Traceability (TSS) to identify insider leakage via black-box ``reconstruction boxes.'' Existing schemes achieve one property but not both, leaving practical systems exposed to either undetectable dealer misbehavior or untraceable share leakage. Combining these properties introduces the ``Provenance Paradox'': a verifiability-aware reconstruction box with access to verification predicates and public transcripts can reject dummy shares used for tracing because they have no provenance in the public transcript.
We present TSS-PV, the first publicly verifiable traceable secret sharing scheme that resolves this paradox. Our key insight is to inject indistinguishable dummy shares during the sharing phase itself, ensuring they are committed to the public transcript before any reconstruction box is constructed. We formalize syntax and security under a modular adversarial model: public verifiability holds against fully malicious dealers and parties; traceability identifies leaking parties after honest distribution; and non-imputability prevents a malicious dealer from framing honest parties. Both tracing properties assume a verifiability-aware (perfect) reconstruction box.
We instantiate TSS-PV over cyclic groups using Schnorr-based NIZKs and a recent generic tracing framework (CRYPTO'24). Public verification costs scale linearly in the number of parties; tracing costs are quadratic. A Curve25519 prototype on commodity hardware demonstrates practicality: for $32$--$256$ parties, distribution verification completes in $14$--$127$ ms, tracing in $0.24$--$76$ s, and trace verification in $0.15$--$25$ s.
Tight Generic PRF Security of HMAC and NMAC
HMAC and its variant NMAC are among the most widely used methods for keying a cryptographic hash function to obtain a PRF or a MAC. Yet, even after nearly three decades of research, their generic PRF security still remains poorly understood, where the compression function of the underlying hash function is treated as a black box accessible to the adversary. Although a series of works have exploited compression function queries to mount generic attacks, proving tight bounds on the generic PRF security of HMAC and NMAC has remained a challenging open question until now.
In this paper, we establish tight bounds on the generic PRF security of HMAC and NMAC. Our bounds capture the influence of the number of construction queries, the number of compression function queries, and the maximal block length of a message on their security. The proofs are carried out in the multi-user setting and the bounds hold regardless of the number of users. In addition, we present matching attacks to demonstrate that our bounds are essentially tight. Taken together, our results close a longstanding gap in the generic PRF security analysis of HMAC and NMAC.
HQC Beyond the Standard: Ciphertext Compression and Refined DFR Analysis
Hamming Quasi-Cyclic (HQC), recently selected by NIST for standardization, does not employ ciphertext compression, unlike its lattice-based counterpart Kyber. In lattice-based encryption, ciphertext compression is a standard post-processing step, typically implemented through coefficient-wise rounding. In contrast, analogous methods have not yet been explored in code-based cryptography. We address this gap by developing techniques to reduce ciphertext sizes in schemes defined over the Hamming metric, with a particular focus on HQC.
To support this approach, the decryption failure rate (DFR) analysis is generalized. Specifically, we revisit the modeling of the error that must be corrected except with probability $2^{-\lambda}$ to achieve $\lambda$ bits of security, which was previously tractable only under an independence assumption. We propose a more accurate model of the error distribution, which takes dependencies between the coefficients into account. As confirmed by extensive simulations, the proposed model sharpens the DFR analysis and, hence, our understanding of the security of HQC.
Building on this generalized framework, we present a ciphertext compression mechanism that enables a precise DFR analysis and is therefore transparent with respect to security. This is achieved by carefully designing a quantization code with a direct-product structure, aligned with HQC's error-correcting code. For the parameters proposed in the round 4 submission, our techniques reduce HQC ciphertext sizes by up to 4.7%; a proof-of-concept implementation confirms that this improvement comes without noticeable loss in efficiency. Reductions of up to 10% are achievable through a trade-off with public-key size.
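For intuition, the coefficient-wise rounding that serves as the "standard post-processing step" in lattice-based encryption can be sketched in a few lines. This is a minimal Python sketch of the contrasted lattice-side technique, not the paper's Hamming-metric quantizer; the modulus $Q = 3329$ and bit-width $D = 10$ are illustrative Kyber-style values, not HQC parameters.

```python
# Coefficient-wise rounding compression: Z_Q -> Z_{2^D} and back.
Q, D = 3329, 10  # illustrative Kyber-style values (assumption, not HQC)

def compress(x: int) -> int:
    # Keep roughly the D most significant bits of x, scaled from mod Q.
    return round(x * (1 << D) / Q) % (1 << D)

def decompress(y: int) -> int:
    # Approximate inverse: scale back from mod 2^D to mod Q.
    return round(y * Q / (1 << D)) % Q

# The round trip distorts each coefficient by a small, bounded amount,
# which the scheme's error-correcting capacity must absorb.
err = max(min(d, Q - d)
          for x in range(Q)
          for d in [abs(decompress(compress(x)) - x)])
print(err)  # worst-case distortion: tiny compared to Q/2
```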
Quasi-perfect (de)compression of elliptic curve points in the highly $2$-adic scenario
In this short note, a new laconic, secure, and efficient (de)compression method is provided for points of any elliptic curve over any highly $2$-adic finite field of large characteristic. Such fields are ubiquitous in modern elliptic curve cryptography, whereas they severely slow down the conventional $x$-coordinate (de)compression technique. In comparison with the current state of the art in the given research direction, the new method requires neither complicated mathematical formulas nor conditions on the curve, nor reliable pseudo-random generators. Thereby, the present work is universal and much more implementation-friendly, which justifies its existence, despite the absence of interesting mathematics behind it.
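For contrast, here is the conventional $x$-coordinate (de)compression the note improves upon, as a toy Python sketch. The curve parameters are arbitrary small values chosen with $p \equiv 3 \pmod 4$, where the square root is a single exponentiation; over a highly $2$-adic field (a large power of two dividing $p-1$), this square-root step instead requires a Tonelli-Shanks loop whose length grows with the $2$-adicity, which is the slowdown the note targets.

```python
# Conventional compression of a point on y^2 = x^3 + a*x + b over F_p:
# store x plus one parity bit of y; decompression recomputes y via a
# modular square root (cheap here only because p % 4 == 3).
p, a, b = 103, 1, 1  # toy parameters (assumption), p % 4 == 3

def compress(x, y):
    return (x, y & 1)

def decompress(x, sign):
    rhs = (x * x * x + a * x + b) % p
    y = pow(rhs, (p + 1) // 4, p)  # sqrt; Tonelli-Shanks needed in general
    if (y * y) % p != rhs:
        raise ValueError("x is not the abscissa of a curve point")
    return (x, y if (y & 1) == sign else p - y)

# brute-force all curve points on this toy curve and check the round trip
for x in range(p):
    rhs = (x * x * x + a * x + b) % p
    y = pow(rhs, (p + 1) // 4, p)
    if (y * y) % p == rhs:
        assert decompress(*compress(x, y)) == (x, y)
```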
On the Equivalence of Polynomial Commitments for an Identical Polynomial under Different Bases
We propose a Pairing-based Polynomial Consistency Protocol (PPCP) that verifies the equivalence of polynomial commitments generated under different basis representations, such as the coefficient and Lagrange bases. By leveraging pairing relations, PPCP proves that two commitments correspond to an identical underlying polynomial vector without revealing the polynomial itself. This enables efficient proof aggregation and recursive composition across heterogeneous SNARK systems that adopt distinct polynomial encodings.
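As background (and not the paper's protocol itself), the standard fact PPCP builds on can be stated for KZG-style commitments: a structured reference string can encode either the monomial powers $g^{\tau^i}$ or the Lagrange values $g^{L_j(\tau)}$, and both commit an identical polynomial to the same group element. The notation below ($\tau$ for the SRS trapdoor, $\omega$ for the evaluation domain) is standard but assumed here.

```latex
% A polynomial in coefficient form f(X) = \sum_i c_i X^i, equivalently in
% Lagrange form f(X) = \sum_j f(\omega^j) L_j(X), has the same commitment:
\[
  C_{\mathrm{coeff}}
    = \prod_i \bigl(g^{\tau^i}\bigr)^{c_i}
    = g^{f(\tau)}
    = \prod_j \bigl(g^{L_j(\tau)}\bigr)^{f(\omega^j)}
    = C_{\mathrm{Lagrange}}.
\]
% Proving this equality without revealing f -- e.g., when the two commitments
% originate from different SNARK systems -- is the consistency statement that
% PPCP verifies via pairing relations.
```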
Stateless 2PC Signatures for Internet-Scale Authentication and Authorization
The industry is moving away from passwords for authentication and authorization, with hardware devices for storing long-term cryptographic keys emerging as the leading alternative. However, these devices often have limited displays and remain vulnerable to theft, to malware, or to users being tricked into signing malicious payloads. Current systems provide little fallback security in such cases. Any solution must also meet strict requirements: compatibility with industry standards, scalability to handle high request volumes, and high availability.
We present a novel design for authentication and authorization that meets these demands. Our approach virtualizes the authenticating/authorizing party via a two-party signing protocol with a helper entity, ensuring that keys remain secure even if a device is compromised and that every signed message conforms to a security policy.
We formalize the required properties for such protocols and show how they are met by existing schemes (e.g., FROST for Schnorr, Boneh–Haitner–Lindell–Segev '25 for ECDSA). Motivated by the widespread use of ECDSA (FIDO2/Passkeys, blockchains), we introduce a new, optimized two-party ECDSA protocol that is significantly more efficient than prior work. At its core is a new variant of exponent-VRF, improving on earlier constructions and of independent interest. We validate our design with a proof-of-concept virtual authenticator for the FIDO2 Passkeys framework.
\textsc{Npir}: High-Rate PIR for Databases with Moderate-Size Records
Private information retrieval (PIR) is a widely used technique in privacy-preserving applications that enables users to retrieve records from a database without revealing any information about their queries. This study focuses on a type of PIR that has a high ratio between the size of the record retrieved by the client and the server's response. Although significant progress has been made in high-rate PIR in recent years, the computational overhead on the server side remains rather high. This results in low server throughput, particularly for applications involving databases with moderate-size records (i.e., tens of kilobytes), such as private advertising systems.
In this paper, we present \textsc{Npir}, a high-rate single-server PIR that is based on NTRU encoding and outperforms the state-of-the-art Spiral (Menon \& Wu, S\&P 2022) and NTRUPIR (Xia \& Wang, EuroS\&P 2024) in terms of server throughput for databases with moderate-size records. Specifically, for databases ranging from 1 GB to 32 GB with 32 KB records, the server throughput of \textsc{Npir} is 1.50 to 2.84 times greater than that of Spiral and 1.77 to 2.55 times greater than that of NTRUPIR.
To improve server throughput without compromising the high-rate feature, we propose a novel tool called NTRU packing, which compresses the constant terms of the underlying polynomials of multiple NTRU encodings into a single NTRU encoding, thereby reducing the size of the server's response. Furthermore, \textsc{Npir} naturally supports batch processing for moderate-size records, and can easily handle retrieval of records of varying sizes.
Scalable Private Set Intersection over Distributed Encrypted Data
Finding intersections across sensitive data is a core operation in many real-world data-driven applications, such as healthcare, anti-money laundering, financial fraud, or watchlist applications. These applications often require large-scale collaboration across thousands or more independent sources, such as hospitals, financial institutions, or identity bureaus, where all records must remain encrypted during storage and computation, and are typically outsourced to dedicated/cloud servers. Such a highly distributed, large-scale, and encrypted setting makes it very challenging to apply existing solutions, e.g., (multi-party) private set intersection (PSI) or private membership test (PMT).
In this paper, we present Distributed and Outsourced PSI (DO-PSI), an efficient and scalable PSI protocol over outsourced, encrypted, and highly distributed datasets. Our key technique lies in a generic threshold fully homomorphic encryption (FHE) based framework that aggregates equality results additively, which ensures high scalability to a large number of data sources. In addition, we propose a novel technique called \textit{nonzero-preserving mapping}, which maps a zero vector to zero and preserves nonzero values. This allows homomorphic equality tests over a smaller base field, substantially reducing computation while enabling higher-precision representations. We implement DO-PSI and conduct extensive experiments, showing that it substantially outperforms existing methods in both computation and communication overheads. Our protocol handles a billion-scale set distributed and outsourced to a thousand data owners within one minute, directly reflecting large-scale deployment scenarios, and achieves up to an 11.16$\times$ improvement in end-to-end latency over prior state-of-the-art methods.
LPG: Raise Your Location Privacy Game in Direct-to-Cell LEO Satellite Networks
Multi-tenant direct-to-cell (D2C) Low Earth Orbit (LEO) satellite networks pose significant risks to users’ location privacy by linking Mobile Network Operator (MNO)-managed identities with Satellite Network Operator (SNO)-visible locations. Existing privacy solutions are ill-suited to the resource-constrained hardware and orbital dynamics of these satellite environments. We present LPG (Location Privacy Game), the first protocol-layer solution offering user-configurable location privacy for D2C LEO. LPG achieves this via identity-location decoupling: SNOs provide connectivity without visibility of user identity, while MNOs manage service and billing without access to precise location information. LPG enables offline secure authentication and key agreement without revealing user identity to satellites, supports user-configurable location disclosure at chosen geographic granularity for essential service needs, and ensures fair billing between MNOs and SNOs through privacy-preserving settlement. Our implementation on a real-world in-orbit LEO satellite and commercial mobile phones demonstrates that LPG is practical and viable in resource-constrained, highly-dynamic LEO environments.
Designing QC-MDPC Public Key Encryption Schemes with Niederreiter's Construction and a Bit Flipping Decoder with Bounded DFR
Post-quantum public key encryption (PKE) schemes employing quasi-cyclic (QC) sparse parity-check matrix codes are enjoying significant success, thanks to their good performance profile and reductions to believed-hard problems from coding theory.
However, using QC sparse parity-check matrix codes (i.e., QC-MDPC/LDPC codes) comes with a significant challenge: determining their decoding failure rate (DFR) in closed form, as decoding failures are known to leak information on the private key. Furthermore, there is no formal proof that changing the (constant) rate of the employed codes does not change the nature of the underlying hard problem, nor is the hardness of decoding random QC codes formally related to the decoding hardness of random codes.
In this work, we address and solve these challenges, providing a novel closed-form estimation of the decoding failure rate for three-iteration bit flipping decoders, and proving computational equivalences among the aforementioned problems. This allows us to systematically design a Niederreiter-style QC-MDPC PKE, enjoying the flexibility granted by freely choosing the code rate and the significant improvement in tightness of our DFR bound.
We report a $2\times$ improvement in public key and ciphertext size w.r.t. the previous best cryptosystem design with closed-form DFR bounds, LEDAcrypt-KEM. Furthermore, we show that our PKE parameters yield $30\%$ smaller public keys and $2.6\times$ smaller ciphertexts w.r.t. HQC, the key encapsulation method employing a code-based PKE that was recently selected by the US NIST for standardization.
Multi-Party Private Join
A multi-party private join (MPPJ) protocol enables multiple source parties to provide a receiver party with the inner joins over their respective datasets, while revealing as little information as possible. There is currently no protocol that directly and efficiently enables such an MPPJ beyond the two- or three-party setting. The presently known protocols either achieve weaker functionality (e.g., multi-party private set intersection protocols) or more general ones (e.g., private-join-compute and generic secure multi-party computation protocols) and are therefore more costly to run for the sources. This work formally introduces MPPJ as an explicit goal, and proposes an efficient, helper-assisted protocol that achieves $n$-party inner joins with small leakage and close-to-optimal overhead for the sources. Specifically, for $n$ databases with $m$ rows, it requires only a single $O(m)$ upload from the sources to the helper, and a single $O(n \cdot m)$ download from the helper to the receiver. Moreover, the helper is entirely oblivious: it enables the efficiency and simplicity goals we are striving for, but it does not learn anything about the computation it facilitates. We formally model and prove the security of our protocol from standard assumptions, in the passive-adversary model. Then, we provide an open-source implementation and an extensive performance evaluation. According to our experiments, our protocol requires 1.02 to 20 times less communication than a current private-join-compute protocol (with no computation over the join) for 2 to 6 parties and input database sizes from 1.5K to 250K records. Finally, we demonstrate the versatility of our approach by extending our protocol to threshold-joins.
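The communication shape targeted above (one $O(m)$ upload per source, one aggregated download) can be illustrated with a folklore keyed-PRF join. This Python sketch is an assumption-laden toy, not the paper's protocol: unlike the oblivious helper of MPPJ, the helper here learns which tags match, and all names and keys below are made up for illustration.

```python
import hmac
import hashlib

def tag(k: bytes, record_key: str) -> bytes:
    # Keyed PRF on the join key; the sources share k, the helper does not.
    return hmac.new(k, record_key.encode(), hashlib.sha256).digest()

k = b"shared-by-sources-only"   # hypothetical pre-shared source key
sources = [{"alice", "bob", "carol"},
           {"bob", "carol", "dave"},
           {"carol", "bob"}]

uploads = [{tag(k, r) for r in s} for s in sources]  # one O(m) upload each
joined = set.intersection(*uploads)                  # helper-side inner join
print(len(joined))  # 2 -> the tags of {"bob", "carol"}
# A receiver holding k can re-tag its own keys to recognize the matches;
# the real protocol replaces plaintext tags with threshold-FHE ciphertexts
# so the helper learns nothing at all.
```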
Efficient Privacy-Preserving Blueprints for Threshold Comparison
Privacy-Preserving Blueprints (PPBs), introduced by Kohlweiss et al. at EUROCRYPT 2023, offer a method for balancing user privacy and bad-actor detection in private cryptocurrencies. A PPB scheme allows a user to append a verifiable escrow to their transactions which reveals some identifying information to an authority in the case that the user misbehaved. A natural PPB functionality is for escrows to reveal user information if the user sends an amount of currency over a certain threshold. However, prior works constructing PPBs for such a functionality have severe limitations when it comes to efficiency: escrows are either computationally infeasible to compute, or too large to be plausibly stored on a large-scale distributed ledger. We address these gaps by constructing a space- and computation-efficient PPB for threshold comparison, producing escrows under 2 KB that can be computed in seconds. The scheme can be instantiated using well-known cryptographic primitives, namely variants of the ElGamal encryption scheme and generic non-interactive zero-knowledge proofs. As an additional contribution, we implement one of the theoretical generic PPB constructions originally proposed by Kohlweiss et al. and find that it performs surprisingly well in practice. For the threshold comparison functionality it requires approximately 14 KB escrows, and can be computed in around 12 seconds.
Bootstrapping (T)FHE Ciphertexts via Automorphisms: Closing the Gap Between Binary and Gaussian Keys
The GINX method in TFHE enables low-latency ciphertext bootstrapping with relatively small bootstrapping keys but is limited to binary or ternary key distributions. In contrast, the AP method supports arbitrary key distributions, albeit at the cost of significantly larger bootstrapping keys. Building on AP, automorphism-based methods, introduced in LMK⁺ (EUROCRYPT 2023), achieve smaller key sizes. However, each automorphism application necessitates a key switch, introducing additional computational overhead and noise accumulation.
This paper advances automorphism-based methods in two important ways. First, it proposes a novel traversal blind rotation algorithm that optimizes the number of key switches for a given key material. Second, it introduces a new external product that is automorphism-parametrized and seamlessly applies an automorphism to one of the input ciphertexts. Together, these techniques substantially reduce the number of key switches, resulting in faster bootstrapping and improved noise control. As an independent contribution, we introduce a comprehensive theoretical framework for analyzing the expected number of automorphism key switches. The predictions of this framework perfectly align with the results of extensive numerical experiments, demonstrating its practical relevance.
In typical settings, by leveraging additional key material, the LLW⁺ approach (TCHES 2024) reduces the number of key switches by 17% compared to LMK⁺. Our combined techniques achieve a 46% reduction using similar key material and can eliminate an arbitrarily large fraction (e.g., more than 99%) of key switches with only a moderate ($9\times$) increase in key material size. As a result, the total bootstrapping runtime decreases by more than 34%.
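As standard background for why each automorphism application costs a key switch (generic RLWE algebra, not taken from the paper): applying $\psi_t : X \mapsto X^t$ to an RLWE ciphertext rotates the secret key along with the message.

```latex
% For a ciphertext (a(X), b(X)) with b = a \cdot s + m + e over
% R_q = \mathbb{Z}_q[X]/(X^N + 1), the ring automorphism \psi_t gives
\[
  \psi_t(b) \;=\; \psi_t(a)\,\psi_t(s) + \psi_t(m) + \psi_t(e),
\]
% i.e. (\psi_t(a), \psi_t(b)) encrypts \psi_t(m) under the rotated key
% \psi_t(s). A key switch \psi_t(s) \to s is therefore needed before mixing
% with other ciphertexts; reducing how often this happens is exactly what
% the traversal blind rotation and the automorphism-parametrized external
% product described above achieve.
```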
Strong Designated Verifier Signatures with Non-delegatability from CSIDH
Designated verifier signature allows a signer to designate a verifier who can verify the signature. A strong designated verifier signature (SDVS) enhances privacy by ensuring that the signature itself does not leak information about the signer’s identity to anyone other than the designated verifier. Non-delegatability prevents the signer’s ability to generate valid signatures from being delegated to others; this property is important for SDVS applications such as e-voting. To date, the post-quantum SDVS schemes with non-delegatability that have been proposed are lattice-based or hash-based. While isogeny-based SDVS schemes have been proposed, none of the existing works provide a proof of non-delegatability.
In this paper, we present the first isogeny-based SDVS scheme with a formal proof of non-delegatability. Our construction uses the quadratic twists of elliptic curves. The security of our scheme is proven under the commutative supersingular isogeny gap Diffie–Hellman assumption and the group action inversion problem assumption in the random oracle model.
Bridging Keyword PIR and Index PIR via MPHF and Batch PIR
This paper presents a Keyword Private Information Retrieval (Keyword PIR) scheme that achieves a constant-factor online computation and communication overhead compared to the underlying Index PIR, bridging the gap between Keyword PIR and Index PIR, and enabling efficient and privacy-preserving queries over diverse databases. We introduce a new Key-Value Store (KVS) instantiated via a Minimal Perfect Hash Function (MPHF), referred to as MPHF-KVS, in which each keyword query requires only a single index query. We then develop a generic Batch PIR framework that converts Index PIR into Keyword PIR using KVS encoding.
In particular, when the KVS is instantiated using a Binary Fuse Filter (BFF-KVS), Keyword PIR can be reduced to Batch PIR. Leveraging the updatable hint structure of PIR with side information, we propose a novel Rewind \& Skip technique that enables the execution of multiple queries within a single round.
In MPHF-KVS, the online computation and communication costs are at most $2\times$ those of Index PIR. In our Batch PIR with BFF-KVS, building upon three recent PIR schemes with sublinear server-side online computation and communication cost and no extra hint storage, our approach inherits their advantages and achieves keyword query costs of less than $7\times$ the cost of an index query, while still maintaining sublinear online complexity.
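A toy illustration (assumed, not the paper's construction) of why an MPHF-backed KVS turns one keyword query into exactly one index query: the brute-force seed search below stands in for a real MPHF construction such as CHD or BBHash, which achieve the same injectivity with only a few bits per key.

```python
import hashlib
from itertools import count

def h(seed: int, key: str, n: int) -> int:
    # Toy hash family; a real MPHF replaces the seed search with a compact
    # per-bucket structure instead of retrying globally.
    d = hashlib.sha256(f"{seed}|{key}".encode()).digest()
    return int.from_bytes(d[:8], "big") % n

def find_seed(keys, n):
    # Find a seed making h injective on the key set, i.e., a minimal
    # perfect hash for these keys (a handful of tries for tiny n).
    for seed in count():
        if len({h(seed, k, n) for k in keys}) == n:
            return seed

kv = {"alice": 10, "bob": 20, "carol": 30}  # hypothetical database
n = len(kv)
seed = find_seed(kv, n)

# server-side KVS: slot h(seed, key) holds (key, value); storing the key
# lets the client reject lookups of non-members that land in the table
table = [None] * n
for k, v in kv.items():
    table[h(seed, k, n)] = (k, v)

# client: one keyword maps to exactly one index, hence one Index-PIR query
i = h(seed, "bob", n)
assert table[i] == ("bob", 20)
```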
On the Traceability of Group Signatures: Uncorrupted User Must Exist
Group signatures (GS) are a well-known cryptographic primitive that provide both anonymity and traceability. Several implication results have been established, mainly focusing on different security levels of anonymity. For example, fully anonymous GS implies public-key encryption (PKE), and selfless anonymous GS can be constructed from one-way functions and non-interactive zero-knowledge proofs, among others. In this paper, we explore a winning condition for full traceability: an adversary must produce a valid group signature whose opening reveals an uncorrupted user. We present a generic construction of GS that is secure in the Bellare-Micciancio-Warinschi (BMW) model, except for the above condition, based solely on PKE. We emphasize that the proposed construction is highly artificial and practically meaningless because the verification algorithm always outputs 1 regardless of the input. This result suggests that the winning condition is essential for full traceability, namely that an uncorrupted user must exist.
Revisiting Sum-check-based Polynomial Commitment Schemes
The sum-check protocol is a fundamental building block in succinct arguments. However, its security formalization is often tightly coupled with the larger protocol in which it is embedded, making modular design and analysis challenging. To address this limitation, we introduce \emph{functional proof systems (FPS)}, generalizing interactive proof systems by viewing the verifier as a function parameterized by the prover, and defining security by asserting properties of this function. This novel perspective not only enables a clean, self-contained security definition for sum-check, but also opens the possibility of designing and formalizing the security of protocols that prove statements dynamically generated during the protocol---both tasks that are difficult for traditional proof systems.
We develop a framework for composing multiple FPSs by executing them in parallel, and for analyzing the security of the composite protocol. We apply this framework to analyze existing protocols, including BaseFold, a popular polynomial commitment scheme, and Bulletproofs, a well-known inner product argument, providing more modular and simpler security proofs than their original analyses. Particularly for BaseFold, our security proof avoids the need to introduce a new variant of the correlated agreement theorem, thus building its security directly on the well-studied theorems of FRI. Finally, we construct a new transparent, pairing-free, doubly efficient, and homomorphically additive polynomial commitment scheme by composing existing protocols, demonstrating the practical utility of our framework for designing novel cryptographic schemes.
LatticeFold: A Lattice-based Folding Scheme and its Applications to Succinct Proof Systems
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure and require a large (256-bit) field. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Since LatticeFold can operate over a small (64-bit) field, our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing plausible post-quantum security. Moreover, LatticeFold operates over the same module structure used by fully homomorphic encryption (FHE) and lattice signature schemes, and can therefore benefit from software optimizations and custom hardware designed to accelerate these lattice schemes.
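A short worked statement of the norm-growth problem mentioned above, using the standard Ajtai commitment; the notation (norm bound $\beta$, folding challenges $r_1, r_2$) is generic rather than taken from the paper.

```latex
% Ajtai commitment: binding only for low-norm openings.
\[
  \mathrm{com}(\mathbf{w}) = A\mathbf{w} \bmod q,
  \qquad \|\mathbf{w}\|_\infty \le \beta .
\]
% It is additively homomorphic,
\[
  \mathrm{com}(\mathbf{w}_1) + \mathrm{com}(\mathbf{w}_2)
    = \mathrm{com}(\mathbf{w}_1 + \mathbf{w}_2),
\]
% but a folded witness \mathbf{w} = r_1\mathbf{w}_1 + r_2\mathbf{w}_2 is only
% guaranteed to satisfy \|\mathbf{w}\|_\infty \le (|r_1| + |r_2|)\beta, so the
% extractable norm bound degrades with every folding round unless it is
% actively enforced -- the role played by LatticeFold's sumcheck-based check.
```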
Release the Power of Rejected Signatures: An Efficient Side-Channel Attack on the ML-DSA Cryptosystem
ML-DSA, the module-lattice-based digital signature standard formerly known as CRYSTALS-Dilithium, is a lattice-based post-quantum signature scheme that the National Institute of Standards and Technology officially standardized under FIPS 204 in August 2024. ML-DSA generates one valid signature and multiple rejected signatures during a single signing process. Most side-channel attacks targeting ML-DSA have focused solely on the valid signature, while largely neglecting the hints contained in rejected signatures. Building on prior SASCA frameworks originally proposed for ML-DSA, in this paper we present an efficient and fully practical instantiation of a private-key recovery attack on ML-DSA that jointly exploits side-channel leakages from both valid and rejected signatures within a unified factor graph. This concrete instantiation maximizes the information extracted from a single signing attempt and minimizes the number of traces required for full key recovery.
We conducted a proof-of-concept experiment with both reference and ASM-optimized implementations on a Cortex-M4 core chip, where the results demonstrate that incorporating rejected signatures reduces the required number of traces by at least $50.0\%$ for full key recovery. Moreover, we show that using only rejected signatures suffices to recover the key with fewer than $30$ traces under our setup. Our findings highlight that protecting rejected signatures is crucial, as their leakage provides valuable side-channel information. We strongly recommend implementing countermeasures for rejected signatures during the signing process to mitigate potential threats.
Universally Composable On-Chain Quadratic Voting for Liquid Democracy
Decentralized governance plays a critical role in blockchain communities, allowing stakeholders to shape the evolution of platforms such as Cardano, Gitcoin, Aragon, and MakerDAO through distributed voting on proposed projects in order to support the most beneficial of them. In this context, numerous voting protocols for decentralized decision-making have been developed, enabling secure and verifiable voting on individual projects (proposals). However, these protocols are not designed to support more advanced models such as quadratic voting (QV), where the voting power, defined as the square root of a voter’s stake, must be distributed among the projects selected by the voter. Simply executing multiple instances of a single-choice voting scheme in parallel is insufficient, as it cannot enforce correct voting power splitting. To address this, we propose an efficient blockchain-based voting protocol that supports liquid democracy under the QV model, while ensuring voter privacy, fairness and verifiability of the voting results. In our scheme, voters can delegate their votes to trusted representatives (delegates), while having the ability to distribute their voting power across selected projects. We model our protocol in the Universal Composability framework and formally prove its UC-security under the Decisional Diffie–Hellman (DDH) assumption. To evaluate the performance of our protocol, we developed a prototype implementation and conducted performance testing. The results show that the size and processing time of a delegate’s ballot scale linearly with the number of projects, while a voter’s ballot scales linearly with both the number of projects and the number of available delegation options. In a representative setting with 64 voters, 128 delegates and 128 projects, the overall traffic amounts to approximately 2.7 MB per voted project, confirming the practicality of our protocol for modern blockchain-based governance systems.
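Under one plausible reading of the constraint described above (total power $\sqrt{\text{stake}}$ split additively across projects; the precise budget rule is the paper's, not reproduced here), the joint check that parallel single-choice elections cannot enforce looks as follows.

```python
import math

def valid_qv_ballot(stake: float, votes: dict[str, float]) -> bool:
    # One plausible QV budget (assumption): the per-project vote weights
    # must sum to at most sqrt(stake), the voter's total voting power.
    return sum(votes.values()) <= math.sqrt(stake) + 1e-9

print(valid_qv_ballot(100, {"proj_a": 6, "proj_b": 4}))  # True:  6+4 <= 10
print(valid_qv_ballot(100, {"proj_a": 8, "proj_b": 4}))  # False: 8+4 >  10
# Running an independent single-choice election per project checks each
# entry in isolation and misses this joint budget -- the correctness
# property the protocol's ballots must enforce in zero knowledge.
```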
One-Step Schnorr Threshold Identification
Threshold cryptographic primitives have not been widely adopted in real-world distributed systems (i.e., beyond the closed-committee model), presumably due to synchronization overhead and complex certification processes for the shareholders. These are both aspects of their extended reliance on infrastructure, an assumption that is usually glossed over in their design. In this work, we propose $\textsf{OSST}$, a Schnorr-based real-time threshold identification protocol that is non-interactive and does not rely on public shares. Given a Shamir $(n, t)$-shared secret $x$, the proposed protocol allows any $t^* \ge t$ (but no less) shareholders to dynamically prove over designated communication channels that their secret keys interpolate to $x$ without revealing any information beyond that. Provers do not engage in distributed computations, sending their packets to the verifier asynchronously; conversely, verifiers need only know the combined public key $y \equiv g ^ x$, without need to pre-validate and register the individual member identities. The protocol is intended for use in permissionless and unmanaged meshes that lack overlay networks and trust infrastructure, a use case space that has been tacitly neglected as "niche" by threshold cryptography. No auditable or certified multi-key setup is required beyond distributing $x$ according to Shamir's secret sharing (or equivalent distributed key generation scheme) and correctly advertising its public counterpart; in particular, the protocol is intended to be secure against impersonation attacks without relying on the consistency of any advertised shares. We provide evidence that this holds by giving a formal security proof in the random oracle model under the one-more discrete-logarithm ($\textsf{OMDL}$) hardness assumption.
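The verification relation rests on standard Shamir-in-the-exponent algebra, sketched below with toy group parameters (a subgroup of prime order $q = 11$ in $\mathbb{Z}_{23}^*$); the actual protocol wraps this in per-shareholder Schnorr proofs over designated channels, which this sketch omits.

```python
q, P, g = 11, 23, 2  # toy parameters: g has prime order q in Z_P^*

def lagrange_at_zero(xs, q):
    # Lagrange coefficients lambda_i with sum_i lambda_i * f(i) = f(0) mod q.
    lams = []
    for i in xs:
        num = den = 1
        for j in xs:
            if j != i:
                num = num * (-j) % q
                den = den * (i - j) % q
        lams.append(num * pow(den, q - 2, q) % q)
    return lams

# dealer: Shamir-share x with threshold t = 3 via f(z) = x + a1*z + a2*z^2
x, a1, a2 = 7, 3, 5
f = lambda z: (x + a1 * z + a2 * z * z) % q
shares = {i: f(i) for i in (1, 2, 4)}  # any t shareholders
y = pow(g, x, P)                       # advertised combined public key

# verifier: combine partial keys g^{x_i} in the exponent
lams = lagrange_at_zero(list(shares), q)
combined = 1
for (i, xi), lam in zip(shares.items(), lams):
    combined = combined * pow(pow(g, xi, P), lam, P) % P
assert combined == y  # shares interpolate to x iff the product equals g^x
```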
Learning from Leakage: Database Reconstruction from Just a Few Multidimensional Range Queries
Searchable Encryption (SE) has shown a lot of promise towards enabling secure and efficient queries over encrypted data. In order to achieve this efficiency, SE inevitably leaks some information, and a big open question is how dangerous this leakage is. While prior reconstruction attacks have demonstrated effectiveness in one-dimensional settings, extending them to high-dimensional datasets remains challenging. Existing methods either demand excessive query information (e.g., an attacker that has observed all possible responses) or produce low-quality reconstructions in sparse databases. In this work, we present REMIN, a new leakage-abuse attack against SE schemes in multi-dimensional settings, based on access and search pattern leakage from range queries. Our approach leverages unsupervised representation learning to transform query co-occurrence frequencies into geometric signals, allowing the attacker to infer relative spatial relationships between records. This enables accurate and scalable reconstruction of high-dimensional datasets under minimal leakage. We begin with a passive adversary that persistently observes all encrypted queries and responses, and later extend our analysis to a more active attacker capable of poisoning the dataset. Furthermore, we introduce REMIN-P, a practical variant of the attack that incorporates a poisoning strategy. By injecting a small number of auxiliary anchor points, REMIN-P significantly improves reconstruction quality, particularly in sparse or boundary regions. We evaluate our attacks extensively on both synthetic and real-world structured datasets. Compared to state-of-the-art reconstruction attacks, our reconstruction attack achieves up to 50% reduction in mean squared error (MSE), all while maintaining fast and scalable runtime. Our poisoning attack can further reduce MSE by an additional 50% on average, depending on the poisoning strategy.
Beyond Incentive Compatibility: Rational Harm-Proof Transaction Fee Mechanisms
On a blockchain, users compete for scarce block space in an auction run by the miner to get their transactions confirmed in the block. This auction is called a transaction fee mechanism (TFM). Recent work [Rou21, CS23, SCW23] has focused on incentive compatibility (IC), requiring that honest behavior maximizes the payoff for each type of strategic player: users, the miner, or miner–user coalitions. In this work, we introduce rational-harm proofness (RHP), which rules out any deviation that harms honest parties without also reducing the deviator’s own utility. RHP closes a gap left by IC: IC does not forbid utility-neutral yet externally harmful deviations. For example, in a second-price auction, the second-highest bidder can increase the winner’s payment without affecting their own payoff. Such a deviation is eliminated by RHP.
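The second-price example is concrete enough to script. This toy Python sketch (an illustration, not from the paper) shows the utility-neutral yet externally harmful deviation: the loser's payoff stays at zero, while the winner's payment rises.

```python
def second_price(bids: dict[str, int]) -> tuple[str, int]:
    # Sealed-bid Vickrey auction: highest bidder wins, pays the second bid.
    order = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = order[0], order[1]
    return winner, bids[runner_up]

# values (assumed): A values the slot at 10, B at 4
honest = second_price({"A": 10, "B": 4})   # B bids truthfully
deviant = second_price({"A": 10, "B": 9})  # B overbids, still loses
print(honest, deviant)  # ('A', 4) ('A', 9)
# B's utility is 0 in both outcomes (B never wins), but A's payment jumps
# from 4 to 9 -- the costless harm that RHP rules out and IC does not.
```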
We characterize TFMs satisfying RHP alongside incentive compatibility for users (UIC) and miners (MIC). For finite block size, we develop a complete characterization in two models:
- In the plain model, where a single miner unilaterally implements the auction, we prove a tetrilemma (a 3-out-of-4 impossibility): among the four desired properties (positive miner revenue, UIC, MIC, and RHP against miner–user coalitions), no mechanism achieves all four simultaneously. Meanwhile, any three are jointly achievable in the plain model.
- In the MPC-assisted model, where a committee of miners jointly implements the auction via multi-party computation (MPC), we construct a randomized TFM with positive miner revenue that achieves UIC, MIC, and RHP against all three types of strategic players. We further show that randomness is necessary: any deterministic TFM satisfying UIC and RHP in this model must confirm no transactions when the number of users exceeds the block size.
Finally, we show that IC and RHP are incomparable: for each strategic role, there are mechanisms satisfying one but not the other in both models. Our results broaden the design objectives for TFMs: beyond incentive compatibility, mechanisms should also preclude costless harm to honest participants.
M&M: Secure Two-Party Machine Learning through Efficient Modulus Conversion and Mixed-Mode Protocols (Full Version)
Secure two-party machine learning has made substantial progress through the use of mixed-mode protocols. Despite these advancements, existing approaches often suffer from efficiency bottlenecks due to the inherent mismatch between the optimal domains of various cryptographic primitives, such as Homomorphic Encryption and Oblivious Transfer. In response to these challenges, we introduce the M\&M framework, which features an efficient modulus conversion protocol. This breakthrough enables seamless integration of the most suitable cryptographic subprotocols within their optimal modulus domains, allowing linear computations to be executed over a prime modulus and nonlinear computations over a power-of-two modulus, with minimal modulus conversion overhead. We further establish new benchmarks for the performance of fundamental primitives, namely comparison and multiplication, across various two-party techniques, together with our practical optimizations to improve efficiency. By incorporating these techniques, M\&M demonstrates significant performance enhancements over state-of-the-art solutions: (i) we report a $6\times$--$100\times$ improvement for approximated truncations with 1-bit error tolerance; (ii) an average $5\times$ (resp. $4\times$) reduction in communication (resp. runtime) for machine learning functions; and (iii) a 25\%--99\% improvement in cost-efficiency for private inference of deep neural networks and a 50\% improvement in private training of gradient boosting decision trees.
Too Easy Fault Injection Attacks on Learning with Rounding (LWR)
We present an extend-and-prune fault injection attack on serial implementations of Learning With Rounding that drop the least-significant bits. By iteratively isolating and recovering progressively larger key portions via faults, the attack recovers the full secret key.
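For reference, a standard-definition LWR sample replaces the LWE error term with deterministic rounding from modulus $q$ down to $p < q$; the dropped low-order bits are exactly what the serial implementations targeted here compute away step by step.

```latex
% Standard LWR sample (background definition, not the paper's notation):
\[
  b \;=\; \Bigl\lfloor \tfrac{p}{q}\,\langle \mathbf{a}, \mathbf{s} \rangle \Bigr\rceil \bmod p,
  \qquad \mathbf{a} \xleftarrow{\$} \mathbb{Z}_q^n,\quad p < q .
\]
% Faulting intermediate values of the serial inner-product/rounding chain
% isolates small portions of s, which the extend-and-prune strategy then
% grows into the full key.
```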
An Extended PUF-based Protocol
We extend a PUF-based authentication protocol with key refresh, hierarchical groups, and revocation. Our framework enables secure communication among enrolled devices without server interaction, allowing group leaders to derive subordinate keys and the server to exclude compromised parties through controlled key updates.
Swarm in EM Hay: Particle Swarm-guided Probe Placement for EM SCA
Despite decades of research in electromagnetic (EM) side-channel analysis (SCA), practical attacks still require manual effort and domain expertise to identify informative probe locations on target devices. Existing approaches rely heavily on exhaustive grid scanning or handcrafted alignment, limiting attack scalability and realism. In this work, we present the first automated and adaptive EM SCA framework that uses particle swarm optimization (PSO) to navigate the probe. Particles are guided by mutual information (MI) leakage maps, enabling efficient recovery of secret-dependent emissions. We introduce a novel application of the Nyström approximation to accelerate MI estimation across EM trace windows, allowing real-time swarm guidance without full kernel computations. Unlike prior work, our method requires no leakage templates, manual tuning, or alignment assistance, enabling automated attacks with minimal assumptions. We validate our framework on both microcontroller and FPGA platforms running AES-128. PSO-guided scanning identifies high-leakage points faster than grid search and reduces the number of traces required for successful CPA-based key recovery by up to a factor of 16, i.e., saving tens of thousands of traces.
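A minimal PSO loop of the kind such a framework could use, with a toy quadratic "hot spot" standing in for the MI leakage map; the real framework estimates MI from EM traces with Nyström acceleration, and every parameter below is an illustrative default, not a value from the paper.

```python
import random

random.seed(0)

def leakage(x: float, y: float) -> float:
    # Toy stand-in for an MI leakage map over the chip surface: one sharp
    # hot spot at (3, -2). A real attack would estimate MI from EM traces.
    return -((x - 3) ** 2 + (y + 2) ** 2)

def pso(n=20, iters=100, w=0.7, c1=1.5, c2=1.5, lo=-10.0, hi=10.0):
    pos = [[random.uniform(lo, hi), random.uniform(lo, hi)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]                         # per-particle best
    gbest = max(pbest, key=lambda p: leakage(*p))[:]    # swarm-wide best
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if leakage(*pos[i]) > leakage(*pbest[i]):
                pbest[i] = pos[i][:]
                if leakage(*pos[i]) > leakage(*gbest):
                    gbest = pos[i][:]
    return gbest

print(pso())  # converges near the hot spot (3, -2) without a grid scan
```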
Efficient Algorithms for $\mathbb{G}_2$ Subgroup Membership testing on Pairing-friendly Curves
Subgroup membership testing serves as a crucial countermeasure against small subgroup attacks, thereby ensuring the security of pairing-based cryptographic protocols. Despite its vital importance, the expensive computational requirements for membership testing on specific pairing-friendly curves pose a non-negligible challenge.
In this paper, we revisit the $\mathbb{G}_2$ membership testing algorithms on KSS16 curves and propose a novel approach specifically designed for the families constructed by the Kachisa-Schaefer-Scott (KSS) method.
Moreover, we generalize several previous methods for $\mathbb{G}_2$ membership testing, rendering them applicable to more generic pairing-friendly curves.
Specifically, we implement an efficient $\mathbb{G}_2$ membership testing on three well-known curves KSS16-329, KSS16-330, and KSS16-766 for verification. The experimental results illustrate that our new method achieves improvements of $24.0\%$, $33.3\%$, and $29.2\%$ in terms of clock cycles compared to the state-of-the-art, respectively.
Time Memory Trade-off For Enumeration
We incorporate a meet-in-the-middle strategy into the enumeration algorithm, enabling a tunable time-memory trade-off. The algorithm can achieve a minimum asymptotic time complexity of \(n^{n/(4e) + o(n)}\), which, in return, demands memory of the same order. This represents a square-root improvement over the state-of-the-art enumeration algorithms. More generally, our approach attains a time complexity of~$n^{cn/(2e) + o(n)}$ with memory usage \(n^{(1-c)n/(2e) + o(n)}\), where $c$ is any constant satisfying \(\frac{1}{2} \leq c < 1\).
Our approach decouples the head and tail blocks of the lattice basis. For a properly selected parameter, each enumeration space becomes asymptotically the square root of the original search space. Each tail vector is then extended to the head block space to find its closest vectors using an efficient neighboring search algorithm. Among all pairs of neighboring vectors that we iterate through, the shortest difference vector is then the solution to the Shortest Vector Problem (SVP).
Apart from the exact version of the algorithm which is of theoretical interest, we also propose heuristic strategies to improve the practical efficiency. First, we show the adaptation of our algorithm to pruned enumeration. Then we show that with a particularly chosen backbone lattice (rescaled~\(\mathbb{Z}^n\)), we are able to accelerate the neighboring search process to an extremely efficient degree. Finally, we optimize parameters and give a practical cost estimation to show how much acceleration we could bring using this new algorithm.
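A quick sanity check of the endpoints of the stated trade-off, using only the formulas quoted above:

```latex
% With time n^{cn/(2e)+o(n)} and memory n^{(1-c)n/(2e)+o(n)}, the balanced
% point c = 1/2 makes both equal to
\[
  n^{n/(4e) + o(n)},
\]
% the square root of classical enumeration's n^{n/(2e)+o(n)} running time,
% which is recovered as c \to 1, where the memory term collapses to n^{o(n)}.
```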