

Papers updated in last 7 days (90 results)

Last updated:  2025-09-21
qedb: Expressive and Modular Verifiable Databases (without SNARKs)
Vincenzo Botta, Simone Bottoni, Matteo Campanelli, Emanuele Ragnoli, and Alberto Trombetta
Verifiable Databases (VDBs) let clients delegate storage to an untrusted provider while maintaining the ability to verify query results. Since databases are foundational and storage delegation is increasingly common, VDBs address a critical need. Existing VDB designs face several limitations: approaches based on general-purpose proof systems (e.g., SNARKs) offer high expressivity but at the cost of cumbersome intermediate representations, heuristic assumptions, and heavy cryptographic machinery, whereas schemes built from specialized authenticated data structures (ADS), such as accumulators, are simpler and rely on well-founded assumptions, yet they support only restricted queries and often incur unacceptable overheads (e.g., a quadratic overhead in the number of database columns). We present $\mathsf{qedb}$, a new construction that advances the state of the art on verifiable databases from ADS: it is the most expressive scheme to date and the first with proof length completely independent of the database size, and it has no quadratic dependence on the number of columns. $\mathsf{qedb}$ is deployment-oriented: it is performant and simple to implement and analyze; it relies on well-founded assumptions; and it can easily be made post-quantum secure (using lattice-based instantiations). One of our primary contributions is a modular, foundational framework separating the information-theoretic logic of a VDB from its cryptographic instantiations. We show how to realize it using pairing-based set accumulators and linear-map vector commitments (which we introduce as a technique), and more generally show that extractable homomorphic polynomial commitments suffice. Our Rust implementation of $\mathsf{qedb}$ scales to DBs with millions of rows. Its proving and verification times are competitive with SNARK-based constructions and improve on ADS-based solutions.
Last updated:  2025-09-21
An Efficient and Secure Boolean Function Evaluation Protocol
Sushmita Sarkar, Vikas Srivastava, Tapaswini Mohanty, Nibedita Kundu, Sumit Kumar Debnath, and Pantelimon Stanica
The secure evaluation of Boolean functions, or Secure Boolean Evaluation (SBE), is an important area of research. SBE allows parties to jointly compute Boolean functions without exposing their private inputs, and it finds applications in privacy-preserving protocols and secure multi-party computation. Existing schemes need at least (1.5λ + 5)-bit strings to garble an AND gate with security parameter λ, where λ is the length of the arbitrary bit strings associated with 0 and 1. In this manuscript, we discuss how to make the communication cost of non-input gates independent of λ. We present an efficient and generic two-party protocol (namely BooleanEval) for the secure evaluation of Boolean functions, utilizing 1-out-of-2 Oblivious Transfer (OT) as a building block. We introduce a randomized Bloom filter (namely, RBF_S) associated with a set S to compute non-input AND gates. BooleanEval employs only hash evaluations and XOR operations as its core computational steps, making it lightweight and fast. Unlike other lightweight state-of-the-art SBE designs, BooleanEval avoids the use of additional cryptographic primitives, such as commitment schemes, reducing the computational overhead.
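BooleanEval's randomized Bloom filter RBF_S and its OT integration are specific to the paper and are not reproduced here; as background, a plain (non-randomized) Bloom filter illustrates the hash-and-bit-operation style of membership structure the abstract builds on. All names below are illustrative, not from the paper.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter: membership tests use only k hash evaluations
    and bit operations, mirroring the lightweight hash/XOR flavor of
    computation the abstract describes (illustrative only)."""

    def __init__(self, m_bits: int, k_hashes: int):
        self.m = m_bits
        self.k = k_hashes
        self.bits = 0  # bit array packed into a single integer

    def _positions(self, item: bytes):
        # Derive k bit positions by hashing the item with a counter
        # prefix for domain separation.
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits |= 1 << p

    def contains(self, item: bytes) -> bool:
        # May return false positives, but never false negatives.
        return all((self.bits >> p) & 1 for p in self._positions(item))
```

A filter answers membership for inserted items with certainty and for absent items up to a tunable false-positive rate, using no cryptographic machinery beyond hashing.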
Last updated:  2025-09-21
Strong Designated Verifier Signatures with Non-delegatability from CSIDH
Hiroki Minamide, Keisuke Tanaka, and Masayuki Tezuka
Designated verifier signatures allow a signer to designate a verifier who can verify the signature. A strong designated verifier signature (SDVS) enhances privacy by ensuring that the signature itself does not leak information about the signer's identity to anyone other than the designated verifier. Non-delegatability is an important property, as it prevents the signer's ability to generate valid signatures from being delegated to others; this matters for SDVS applications such as e-voting. To date, the post-quantum SDVS schemes with non-delegatability that have been proposed are lattice-based or hash-based. While isogeny-based SDVS schemes have been proposed, none of the existing works provide a proof of non-delegatability. In this paper, we present the first isogeny-based SDVS scheme with a formal proof of non-delegatability. Our construction uses the quadratic twists of elliptic curves. The security of our scheme is proven under the commutative supersingular isogeny gap Diffie–Hellman assumption and the group action inversion problem assumption in the random oracle model.
Last updated:  2025-09-20
On the Estonian Internet Voting System, IVXV, SoK and Suggestions
Shymaa M. Arafat
The Estonian i-voting experience is probably the richest to analyze: the country has been considered a pioneer in digitizing both the government and the private sector since 2001, followed by online internet voting (i-voting) in 2005. However, there are still complaints, critiques, and remarks to consider about the IVXV system. In this paper, we introduce a Systematization of Knowledge (SoK) of the Estonian IVXV i-voting system and propose some added security enhancements. The presented SoK discusses applications implemented by election observers in the 2023 & 2024 elections, which, to our knowledge, have never been mentioned or analyzed in academia before. We also point out an unnoticed automated formal-verification analysis of IVXV; the researchers discovered a privacy attack that we show is extendable to possible large-scale encrypted vote copying. In addition, we identify and analyze recent fixes and improvements in the June 2024 version used in the European Parliament elections, connecting them to their academic sources. Finally, we discuss the current system status, propose our own suggestions for some remaining vulnerabilities, and then raise the inevitable question of the approaching quantum threat.
Last updated:  2025-09-20
ORQ: Complex Analytics on Private Data with Strong Security Guarantees
Eli Baum, Sam Buxbaum, Nitin Mathai, Muhammad Faisal, Vasiliki Kalavri, Mayank Varia, and John Liagouris
We present ORQ, a system that enables collaborative analysis of large private datasets using cryptographically secure multi-party computation (MPC). ORQ protects data against semi-honest or malicious parties and can efficiently evaluate relational queries with multi-way joins and aggregations that have been considered notoriously expensive under MPC. To do so, ORQ eliminates the quadratic cost of secure joins by leveraging the fact that, in practice, the structure of many real queries allows us to join records and apply the aggregations “on the fly” while keeping the result size bounded. On the system side, ORQ contributes generic oblivious operators, a data-parallel vectorized query engine, a communication layer that amortizes MPC network costs, and a dataflow API for expressing relational analytics—all built from the ground up. We evaluate ORQ in LAN and WAN deployments on a diverse set of workloads, including complex queries with multiple joins and custom aggregations. When compared to state-of-the-art solutions, ORQ significantly reduces MPC execution times and can process one order of magnitude larger datasets. For our most challenging workload, the full TPC-H benchmark, we report results entirely under MPC with Scale Factor 10—a scale that had previously been achieved only with information leakage or the use of trusted third parties.
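ORQ's oblivious operators are not reproduced here, but the "join and aggregate on the fly" idea can be sketched in plaintext: pre-aggregating one relation per key avoids materializing a potentially quadratic join result, keeping the output bounded by the size of one input. The function below is a hypothetical illustration of the query pattern, not ORQ code.

```python
from collections import defaultdict

def join_aggregate_sum(left, right):
    """Plaintext sketch of a join + SUM aggregation computed on the fly:
    rather than building the full join (up to |left| * |right| rows),
    aggregate the right relation per key in one pass, then do one
    linear pass over the left relation. Output size is bounded by |left|."""
    sums = defaultdict(int)
    for key, value in right:
        sums[key] += value              # pre-aggregate: one pass over right
    return [(key, sums[key]) for key, _ in left]  # one pass over left

result = join_aggregate_sum(
    left=[("a", None), ("b", None)],
    right=[("a", 1), ("a", 2), ("b", 5)],
)
```

Under MPC the same dataflow would be realized with oblivious sorting and aggregation so that access patterns reveal nothing, which is where ORQ's contribution lies.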
Last updated:  2025-09-20
The Syndrome-Space Lens: A Complete Resolution of Proximity Gaps for Reed-Solomon Codes
Russell Okamoto
We resolve the Correlated Agreement (CA) problem for Reed-Solomon codes up to the information-theoretic capacity limit by introducing a fundamental change of basis: from the traditional evaluation domain to the syndrome space. Viewed through this “Syndrome-Space Lens,” the problem of proximity testing transforms into a transparent question of linear-algebraic geometry: a single affine line of syndromes traversing a family of low-dimensional subspaces. This new perspective makes a sharp phase transition at the capacity boundary visible, allowing for a complete characterization of the problem's behavior across all parameter regimes, yielding short, self-contained proofs. Classification. We establish a precise trichotomy organized by the rank margin $\Delta := t-d$. At the capacity boundary ($\Delta=0$), the CA premise is information-theoretically vacuous, and we prove that no rigidity can be concluded without imposing additional structure. One step beyond capacity ($\Delta=1$), the problem enters a “knife-edge” regime where unconditional rigidity does not hold; soundness is recovered either through a combinatorial witness (such as a repeated error support or a small union of supports) or by adding protocol-level structure (such as independent two-fold MCA checks, DEEP/STIR out-of-domain sampling, or a global error locator). For stricter gaps ($\Delta\ge 2$), unconditional rigidity holds under a simple algebraic condition ($(r{+}1)k<m{+}1$), with explicit quantitative bounds. MCA and Practical Implications. Below capacity ($\delta<1-\rho$), the strengthened mutual correlated agreement (MCA) problem reduces to ordinary correlated agreement. MCA holds under the same hypotheses as CA. When folds are generated with independent challenges (e.g., via domain-separated Fiat-Shamir), the per-round security margins add. 
The model-scoped soundness law is $\Pr[\mathrm{FA}] \le q^{-(\sum \Delta_i) s}$, providing a clear and complete rulebook for selecting safe and efficient parameters in FRI/STARK systems. This work bypasses the complex machinery of list-decoding algorithms entirely and resolves the long-standing open problem concerning the gap between the Johnson bound and capacity.
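As a numeric illustration of the stated soundness law $\Pr[\mathrm{FA}] \le q^{-(\sum \Delta_i) s}$, the sketch below evaluates the bound for hypothetical parameters; the field size, rank margins, and challenge count are ours, chosen for illustration, not taken from the paper.

```python
import math

def false_agreement_bound(q: int, deltas: list, s: int) -> float:
    """Evaluate the abstract's model-scoped soundness law
    Pr[FA] <= q^{-(sum Delta_i) * s}: q is the field size, deltas the
    per-round rank margins, s the number of independent challenges."""
    return q ** (-sum(deltas) * s)

# Hypothetical parameters: a 2^64-size field, three rounds with
# rank margin Delta = 2 each, one challenge per round.
bound = false_agreement_bound(2**64, [2, 2, 2], 1)
security_bits = -math.log2(bound)  # 64 * (2+2+2) * 1 = 384 bits
```

Because the per-round margins add in the exponent, parameter selection reduces to summing margins until the target security level is reached.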
Last updated:  2025-09-20
Accelerating FHEW-like Bootstrapping via New Configurations of the Underlying Cryptosystems
Han Wang, Ming Luo, Han Xia, Mingsheng Wang, and Hanxu Hou
This work introduces a new configuration of the GSW fully homomorphic encryption (FHE) scheme (Gentry, Sahai, Waters, Crypto 2013), with a squared gadget, batching, and scale-based homomorphic operations. This configuration offers improved efficiency compared to existing approaches. By utilizing our proposed method as the underlying building block, we can accelerate FHEW-like bootstrapping implementations, including the FHEW and TFHE libraries. We conduct comprehensive experiments to evaluate the concrete performance of our method, demonstrating speedups of more than 2×. For example, the current ring GSW under OpenFHE takes 84 ms and TFHE takes 11.4 ms, while our approach achieves 26.2 ms and 4.8 ms, respectively. These improvements have significant implications for the practical aspects of FHE, enhancing real-world usability.
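The paper's squared gadget itself is not reproduced here; as background, the standard GSW gadget it modifies is the base-$B$ digit decomposition $g^{-1}$ paired with recomposition against the gadget vector $g = (1, B, B^2, \ldots)$, which keeps noise growth small during homomorphic multiplication. A minimal sketch, with parameters chosen for illustration:

```python
def gadget_decompose(x: int, base: int, ell: int) -> list:
    """Standard g^{-1} map from GSW-style schemes: write x as ell
    little-endian base-`base` digits, each bounded by `base`."""
    digits = []
    for _ in range(ell):
        digits.append(x % base)
        x //= base
    return digits

def gadget_recompose(digits: list, base: int) -> int:
    """Inner product with the gadget vector g = (1, base, base^2, ...)."""
    return sum(d * base**i for i, d in enumerate(digits))

# Illustrative parameters: modulus 2^16, base 4, so 8 digits suffice.
x = 51234
digits = gadget_decompose(x, base=4, ell=8)
```

The small digit bound is what controls the noise added per homomorphic operation; configurations like the paper's trade gadget shape against efficiency.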
Last updated:  2025-09-20
Information-Theoretic Broadcast-Optimal MPC
Michele Ciampi, Ivan Damgård, Divya Ravi, Luisa Siniscalchi, and Sophia Yakoubov
Broadcast, though often used as a black box in cryptographic protocols, is expensive to realize in terms of rounds and communication complexity. We investigate the minimal use of broadcast in round-optimal information-theoretic MPC, with statistical security. For information-theoretic MPC with guaranteed output delivery, four rounds of communication are necessary and sufficient (Applebaum, Kachlon and Patra, FOCS 2020; Applebaum, Kachlon and Patra, STOC 2023). We show that broadcast is unavoidable in the second and third rounds of statistical MPC protocols. To complement our lower bounds, we modify the protocol of Applebaum, Kachlon and Patra (STOC 2023) to make use of broadcast only in the second and third round. Along the way, we show that the sharing phase of any three-round information-theoretic VSS protocol must also make use of broadcast in the second and third rounds.
Last updated:  2025-09-20
Improved algorithms for ascending isogeny volcanoes, and applications
Steven Galbraith, Valerie Gilchrist, and Damien Robert
Given two elliptic curves over F_q, computing an isogeny mapping one to the other is conjectured to be classically and quantumly hard. This problem plays an important role in the security of elliptic curve cryptography. In 2024, Galbraith applied recently developed techniques for isogenies to improve the state-of-the-art for this problem. In this work, we focus on computing ascending isogenies with respect to an orientation. Our results apply to both ordinary and supersingular curves. We give a simplified framework for computing self-pairings, and show how they can be used to improve upon the approach from Galbraith to recover these ascending isogenies and eliminate a heuristic assumption from his work. We show that this new approach gives an improvement to the overall isogeny recovery when the curves have a small crater (super-polynomial in size). We also study how these self-pairings affect the security of the (PEARL)SCALLOP group action, gaining an improvement over the state-of-the-art for some very particular parameter choices. The current SCALLOP parameters remain unaffected.
Last updated:  2025-09-20
The zkVot Protocol: A Distributed Computation Protocol for Censorship Resistant Anonymous Voting
Yunus Gürlek and Kadircan Bozkurt
zkVot is a client-side, trustless, distributed computation protocol that utilizes zero-knowledge proving technology. It is designed to achieve anonymous and censorship-resistant voting while ensuring scalability. The protocol is created as an example of how modular and distributed computation can improve both the decentralization and the scalability of the internet. A complete and working implementation of this paper is available at https://github.com/node101-io/zkvot. It is important to emphasize that zkVot is one of the first complete implementations of a fully censorship-resistant anonymous voting application that can scale up to a meaningful number of voters.
Last updated:  2025-09-20
The Semantic Holder (SH): Algebraic Extraction for Legal Opposability
MINKA MI NGUIDJOI Thierry Emmanuel
This manuscript introduces Semantic Holder (SH), the opposability primitive within the Chaotic Affine Secure Hash (CASH) toolkit, completing the framework’s implementation of the Q2CSI philosophy. SH enables legally opposable interpretations through algebraic extraction from polynomial iteration traces, working in concert with CEE (confidentiality) and AOW (reliability). Building upon the Affine Iterated Inversion Problem (AIIP) foundation, SH provides mathematically verifiable legal interpretations with guaranteed minimum opposability bounds. We establish that SH maintains an opposability score Ω ≥ 0.60 through rigorous entropy preservation, institutional explainability, and legal contestability guarantees. The primitive features efficient STARK-proof verifiable computation, cross-jurisdictional compatibility, and quantum resistance through its reduction to AIIP hardness. We demonstrate practical applications in legal smart contracts, regulatory compliance auditing, and digital evidence authentication, providing concrete parameter recommendations for standard security levels. SH represents a significant advancement in cryptographic systems that must operate within legal constraints, enabling transparent and verifiable legal opposability without compromising security or performance.
Last updated:  2025-09-19
Is It Even Possible? On the Parallel Composition of Asynchronous MPC Protocols
Ran Cohen, Pouyan Forghani, Juan Garay, Rutvik Patel, and Vassilis Zikas
Despite several known idiosyncrasies separating the synchronous and the asynchronous models, asynchronous secure multi-party computation (MPC) protocols demonstrate high-level similarities to synchronous MPC, both in design philosophy and abstract structure. As such, a coveted, albeit elusive, desideratum is to devise automatic translators (e.g., protocol compilers) of feasibility and efficiency results from one model to the other. In this work, we demonstrate new challenges associated with this goal. Specifically, we study the case of parallel composition in the asynchronous setting. We provide formal definitions of this composition operation in the UC framework, which, somewhat surprisingly, have been missing from the literature. Using these definitions, we then turn to charting the feasibility landscape of asynchronous parallel composition. We first prove strong impossibility results for composition operators that do not assume knowledge of the functions and/or the protocols that are being composed. These results draw a grim feasibility picture, which is in sharp contrast with the synchronous model, and highlight the question: Is asynchronous parallel composition even a realistic goal? To answer the above (in the affirmative), we provide conditions on the composed protocols that enable a useful form of asynchronous parallel composition, as it turns out to be common in existing constructions.
Last updated:  2025-09-19
SUMMER: Recursive Zero-Knowledge Proofs for Scalable RNN Training
Yuange Li and Xiong Fan
Zero-knowledge proofs of training (zkPoT) enable a prover to certify that a model was trained on a committed dataset under a prescribed algorithm without revealing the model or data. Proving recurrent neural network (RNN) training is challenging due to hidden-state recurrence and cross-step weight sharing, which require proofs to enforce recurrence, gradients, and nonlinear activations across time. We present SUMMER (SUMcheck and MERkle tree), a recursive zkPoT for scalable RNNs. SUMMER generates sumcheck-based proofs that backpropagation through time (BPTT) was computed correctly over a quantized finite field, while nonlinearities such as $\tanh$ and softmax are validated by lookup arguments. Per-step commitments and proofs are folded with Merkle trees, yielding a final commitment and a succinct proof whose size and verification time are independent of the number of iterations. SUMMER offers (i) the first end-to-end zkPoT for RNN training, including forward, backward, and parameter updates; (ii) the first use of LogUp for nonlinear operations with a batched interface; and (iii) efficient recursive composition of lookup and sumcheck proofs. On a Mini-Char-RNN with 12M parameters, the prover runs in 70.1 seconds per iteration, $8.5\times$ faster and $11.6\times$ more memory efficient than the IVC baseline, with 165 kilobyte proofs verified in 20 milliseconds.
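As an illustration of validating a nonlinearity by lookup rather than by in-circuit arithmetic, the sketch below builds a fixed-point tanh table of the sort a lookup argument such as LogUp could check (input, output) pairs against. The scale and bit-widths are hypothetical, not SUMMER's parameters.

```python
import math

SCALE = 1 << 8  # fixed-point scale: integer v encodes the real v / SCALE

def build_tanh_table(in_bits: int = 8) -> dict:
    """Quantized tanh as a lookup table: map every signed in_bits-bit
    input x to round(tanh(x/SCALE) * SCALE). A lookup argument can then
    prove each activation (x, y) lies in this table, avoiding proving
    the transcendental function arithmetically."""
    lo, hi = -(1 << (in_bits - 1)), (1 << (in_bits - 1))
    return {x: round(math.tanh(x / SCALE) * SCALE) for x in range(lo, hi)}

table = build_tanh_table()
```

Committing once to such a table and batching all activation lookups is what makes nonlinearities cheap relative to proving them gate by gate.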
Last updated:  2025-09-19
TACITA: Threshold Aggregation without Client Interaction
Varun Madathil, Arthur Lazzaretti, Zeyu Liu, and Charalampos Papamanthou
Secure aggregation enables a central server to compute the sum of client inputs without learning any individual input, even in the presence of dropouts or partial participation. This primitive is fundamental to privacy-preserving applications such as federated learning, where clients collaboratively train models without revealing raw data. We present a new secure aggregation protocol, TACITA, in the single-server setting that satisfies four critical properties simultaneously: (1) one-shot communication from clients with no per-instance setup, (2) input soundness, i.e., the server cannot manipulate the ciphertexts, (3) constant-size communication per client, independent of the number of participants per instance, and (4) robustness to client dropouts. Previous works on secure aggregation, Willow and OPA (CRYPTO'25), that achieve one-shot communication do not provide input soundness and allow the server to manipulate the aggregation. They consequently do not achieve full privacy and only achieve differential privacy guarantees at best. We achieve full privacy at the cost of assuming a PKI. Specifically, TACITA relies on a novel cryptographic primitive we introduce and realize: succinct multi-key linearly homomorphic threshold signatures (MKLHTS), which enable verifiable aggregation of client-signed inputs with constant-size signatures. To encrypt client inputs, we adapt the Silent Threshold Encryption (STE) scheme of Garg et al. (CRYPTO 2024) to support ciphertext-specific decryption and additive homomorphism. We formally prove security in the Universal Composability framework and demonstrate practicality through an open-source proof-of-concept implementation, showing our protocol achieves scalability without sacrificing efficiency or requiring new trust assumptions.
Last updated:  2025-09-19
ChipmunkRing: A Practical Post-Quantum Ring Signature Scheme for Blockchain Applications
Dmitrii A. Gerasimov
We introduce ChipmunkRing, a practical post-quantum ring signature construction tailored for blockchain environments. Building on our Chipmunk lattice-based cryptographic framework, this implementation delivers compact digital signatures ranging from 20.5 to 279.7KB, with rapid signing operations completing in 1.1-15.1ms and efficient validation processes requiring only 0.4-4.5ms for participant groups of 2-64 members. The cornerstone of our approach is Acorn Verification—a streamlined zero-knowledge protocol that supersedes the classical Fiat-Shamir methodology. This innovation enables linear O(n) authentication complexity using concise 96-byte cryptographic proofs per participant, yielding a remarkable 17.7× performance enhancement for 32-member rings when compared to conventional techniques. Our work includes preliminary security estimates indicating approximately 112-bit security against known quantum algorithms (NIST Level 5, see Section~\ref{sec:quantum-security}), extensive computational benchmarking, and comprehensive support for both standard anonymity sets and collaborative threshold constructions with flexible participation requirements.
Last updated:  2025-09-19
Kani's lemma from Clifford algebra
Tomoki Moriya
In 1997, Kani proved Kani's lemma, which asserts that a commutative diagram of four $g$-dimensional abelian varieties induces an isogeny between product abelian varieties of dimension $2g$, in the course of counting the number of genus-$2$ curves admitting two distinct elliptic subcovers. In recent years, Kani's lemma has come to play a fundamental role in isogeny-based cryptography: it has found numerous cryptographic applications, including both cryptanalysis and protocol construction. However, direct investigation of the lemma itself remains scarce. In this work, we propose a generalization of Kani's lemma. We present a novel formulation that, given a commutative diagram of $2^{n+1}$ abelian varieties of dimension $g$, yields an isogeny of dimension $2^{n}g$. We further establish a connection between this generalized lemma and the theory of Clifford algebras, using the latter as a foundational tool in our construction. To exemplify our framework, we explicitly construct the resulting $2^{n}g$-dimensional isogenies for the cases $n=1,2,3$. The cases $n=2,3$ provide nontrivial generalizations of the original Kani's lemma. This generalization is expected to have novel applications in both mathematics and cryptography.
Last updated:  2025-09-19
Security Amplification of Threshold Signatures in the Standard Model
Karen Azari, Cecilia Boschini, Kristina Hostáková, and Michael Reichle
The current standardization calls for threshold signatures have highlighted the need for appropriate security notions providing security guarantees strong enough for broad application. To address this, Bellare et al. [Crypto'22] put forward a hierarchy of unforgeability notions for threshold signatures. Recently, Navot and Tessaro [Asiacrypt'24] introduced a new game-based definition of strong (one-more) unforgeability for threshold signatures, which however does not achieve Bellare's strongest level of security. Navot and Tessaro analyzed several existing schemes w.r.t. their strong unforgeability security notion, but all positive results rely on idealized models. This is in contrast to the weaker security notion of (standard) unforgeability, for which standard-model constructions exist. This leaves open a fundamental question: is getting strong unforgeability fundamentally harder than standard unforgeability for threshold signatures? In this paper we bridge this gap, by showing a generic construction lifting any unforgeable threshold signature scheme to strong unforgeability. The building blocks of our construction can be instantiated in the standard model under standard assumptions. The achieved notion of strong unforgeability extends the definition of Navot and Tessaro to achieve the strongest level of security according to the hierarchy of Bellare et al. (following a recent classification of security notions for (blind) threshold signatures by Lehmann, Nazarian, and Özbay [Eurocrypt'25]). The starting point for our transformation is an existing construction for single-user signatures from chameleon hash functions by Steinfeld, Pieprzyk and Wang [RSA'07]. We first simplify their construction by relying on a stronger security notion for chameleon hash functions. The bulk of our technical contribution is then to translate this framework into the threshold setting. 
Towards this goal, we introduce a game-based definition for threshold chameleon hash functions (TCHF) and provide a construction of TCHF that is secure under DLOG in the standard model. We believe that our new notion of TCHF might also be of independent interest.
Last updated:  2025-09-19
Data Anonymisation with the Density Matrix Classifier
David Garvin, Mattia Fiorentini, Oleksiy Kondratyev, and Marco Paini
We propose a new data anonymisation method based on the concept of a quantum feature map. The main advantage of the proposed solution is that a high degree of security is combined with the ability to perform classification tasks directly on the anonymised (encrypted) data, resulting in the same or even higher accuracy compared to that obtained when working with the original plain-text data. This enables important use cases in medicine and finance, where anonymised datasets from different organisations can be combined to facilitate improved machine learning outcomes utilising the combined dataset. Examples include combining medical diagnostic imaging results across hospitals, or combining fraud detection datasets across financial institutions. We use the Wisconsin Breast Cancer dataset to obtain results on Rigetti's quantum simulator and Ankaa-3 quantum processor. We compare the results with classical benchmarks and with those obtained from an alternative anonymisation approach using a Restricted Boltzmann Machine to generate synthetic datasets. Finally, we introduce concepts from the theory of quantum magic to optimise the circuit ansatz and hyperparameters used within the quantum feature map.
Last updated:  2025-09-19
Ipotane: Balancing the Good and Bad Cases of Asynchronous BFT
Xiaohai Dai, Chaozheng Ding, Hai Jin, Julian Loss, and Ling Ren
State-of-the-art asynchronous Byzantine Fault Tolerance (BFT) protocols integrate a partially-synchronous optimistic path. Their ultimate goal is to match the performance of a partially-synchronous protocol in favorable situations and that of a purely asynchronous protocol in unfavorable situations. While prior works have excelled in favorable situations, they fall short when conditions are unfavorable. To address these shortcomings, a recent work, Abraxas (CCS'23), retains stable throughput in all situations but incurs very high worst-case latency in unfavorable situations due to slow detection of optimistic path failures. Another recent work, ParBFT (CCS'23) ensures good latency in all situations but suffers from reduced throughput in unfavorable situations due to the use of extra Asynchronous Binary Agreement (ABA) instances. We propose Ipotane, a protocol that attains performance comparable to partially-synchronous protocols in favorable situations and to purely asynchronous ones in unfavorable situations, in terms of both throughput and latency. Ipotane also runs two paths simultaneously: 2-chain HotStuff as the optimistic path and a new primitive Dual-functional Byzantine Agreement (DBA) for the pessimistic path. DBA packs the functionalities of biased ABA and Validated Asynchronous Byzantine Agreement (VABA). In Ipotane, each replica inputs 0 to DBA if its optimistic path is faster, and 1 if its pessimistic path is faster. DBA’s ABA functionality promptly signals the optimistic path’s failure by outputting 1, ensuring Ipotane’s low latency in unfavorable situations. Meanwhile, Ipotane executes DBA instances to continuously produce pessimistic blocks through their VABA functionality. Upon detecting a failure, Ipotane commits the last two pessimistic blocks to maintain high throughput. Moreover, Ipotane leverages DBA’s biased property to ensure the safety of committing pessimistic blocks. 
Extensive experiments validate Ipotane’s high throughput and low latency across all situations.
Last updated:  2025-09-19
Updatable Signature from Lattices
Haotian Yin, Jie Zhang, Wanxin Li, Yuji Dong, Eng Gee Lim, and Dominik Wojtczak
Updatable Signature (US) schemes allow updating signatures so that they can be verified under a new key. This updating feature is useful for key rotation in practice. Cini et al. (PKC'21) first formalised this primitive. However, their post-quantum-secure US scheme does not satisfy their own security definition: it lacks unlinkability and achieves only bounded unforgeability. This paper aims to solve this problem by providing a new, fully secure construction. First, we simplify the definition of unlinkability via a hybrid argument, and reduce the update oracle of the unforgeability experiment by assuming unlinkability. Then, we construct our US scheme from verifiable encryption and the SIS assumption. This scheme is fully unlinkable and unforgeable, but it is also a unique signature scheme in each epoch: only one signature may be issued per message during an epoch, which requires a stateful signer/proxy. This is sufficient for many applications.
Last updated:  2025-09-19
LightCROSS: A Secure and Memory Optimized Post-Quantum Digital Signature CROSS
Harry Hart, Puja Mondal, Suparna Kundu, Supriya Adhikary, Angshuman Karmakar, and Chaoyun Li
Digital signature schemes derived from non-interactive zero-knowledge (NIZK) proofs are rapidly gaining prominence within post-quantum cryptography. CROSS is a promising new code-based post-quantum digital signature scheme based on the NIZK framework. It is currently in the second round of NIST's additional call for standardization of post-quantum digital signatures. However, CROSS's reference implementation has a substantially large memory footprint, which makes its deployment on resource-constrained platforms prohibitively difficult. In this work, we address the issue of the large memory footprint with our implementation, LightCROSS. In particular, we identify and propose memory-efficient algorithms for generating and processing the Merkle tree and GGM tree structures, which are the most memory-intensive operations of CROSS. Apart from these, we also propose several memory optimization techniques such as just-in-time hashing and execution flow analysis. As a result, LightCROSS reduces the memory footprint of key generation, signing, and verification of the CROSS reference code by as much as 93\%, 60\%, and 76\%, respectively. Our memory optimization techniques are not specific to CROSS and can be applied to other NIZK-based signature schemes. With regard to efficiency, matrix multiplications are crucial to the performance of CROSS. We show how the Digital Signal Processing (DSP) instructions on the ARM Cortex-M4, specifically packing and multiplying, can be utilized to efficiently implement matrix operations over finite fields. The DSP optimizations combined with the memory reductions improve the efficiency of CROSS by up to 32\% and 33\% in signing and verification, respectively.
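LightCROSS's specific Merkle-tree algorithms are not reproduced here; a standard memory-efficient technique in the same spirit computes a Merkle root over $2^h$ leaves while retaining only one pending node per tree level, i.e., $O(h)$ rather than $O(2^h)$ memory. A sketch:

```python
import hashlib

def merkle_root_streaming(leaves):
    """Compute a Merkle root over a power-of-two number of leaves while
    holding at most one pending node per level. The stack merges
    equal-level nodes as they appear, like binary carry propagation."""
    stack = []  # entries are (level, digest), levels strictly decreasing
    for leaf in leaves:
        node = (0, hashlib.sha256(leaf).digest())
        while stack and stack[-1][0] == node[0]:
            lvl, left = stack.pop()
            node = (lvl + 1, hashlib.sha256(left + node[1]).digest())
        stack.append(node)
    assert len(stack) == 1, "number of leaves must be a power of two"
    return stack[0][1]
```

Leaves are consumed one at a time, so they can be regenerated on demand (as with just-in-time hashing) instead of being stored, which is where the memory savings come from.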
Last updated:  2025-09-19
Lattice-Based Group Signatures in the Standard Model, Revisited
Nam Tran, Khoa Nguyen, Dongxi Liu, Josef Pieprzyk, and Willy Susilo
The study of lattice-based group signatures has been a prominent research direction since 2010. While recent advances in the field have yielded schemes in the random oracle model with strong security properties and nearly practical efficiency, the current state of affairs for lattice-based group signatures in the standard model is still much less satisfactory. Existing schemes, proposed by Katsumata and Yamada (EUROCRYPT'19) or implied by generic non-interactive zero-knowledge proofs for NP (by Peikert and Shiehian at CRYPTO'19 and by Waters at STOC'24), either only fulfil a weak notion of anonymity called selfless anonymity, or require a strong lattice assumption, or suffer from extremely large signatures and/or public keys. This work aims to enhance the state of affairs for lattice-based group signatures in the standard model. We provide improved constructions that simultaneously achieve: (i) signature and public key sizes significantly smaller than those of known schemes; (ii) full anonymity in the CPA and CCA senses; (iii) security based on standard SIS and LWE assumptions with polynomial approximation factors regarding worst-case lattice problems (in general lattices). Our design approach slightly departs from that of existing pairing-based and lattice-based constructions. In the design process, we adapt and develop several lattice-based cryptographic ingredients that may be of independent interest. At the heart of our constructions is a reasonably efficient non-interactive zero-knowledge proof system for relations typically appearing in advanced privacy-preserving lattice-based cryptographic protocols. These relations are addressed by a trapdoor $\Sigma$-protocol with an inverse polynomial soundness error, which is made non-interactive via the standard-model Fiat-Shamir transform of Canetti et al. (STOC'19) and a compiler by Libert et al. (ASIACRYPT'20).
Last updated:  2025-09-19
Many-time Linkable Ring Signatures
Nam Tran, Khoa Nguyen, Dongxi Liu, Josef Pieprzyk, and Willy Susilo
Linkable ring signatures (Liu et al., ACISP'04) are ring signatures with a linking mechanism for detecting signatures from the same signer. This functionality has found many practical applications in electronic voting, cryptocurrencies, and whistleblowing systems. However, existing linkable ring signature schemes impose a fundamental limitation: users can issue only one signature, and after that their anonymity is no longer guaranteed. This one-time usage restriction is inadequate for many real-world scenarios. This work introduces the notion of Many-time Linkable Ring Signatures, extending the anonymity guarantees of standard linkable ring signatures. Specifically, many-time linkable ring signatures ensure that signers remain anonymous as long as the number of their signatures stays below a system-global threshold. Only when a signer exceeds this threshold is anonymity lost. We formalize this via a security notion called T-anonymity, which guarantees that adversaries cannot distinguish signatures from users who have each produced at most T signatures. This notion of anonymity generalizes the one-time anonymity of previous linkable schemes, while providing stronger guarantees than existing constructions. We also present a lattice-based construction with proven security in the quantum random oracle model (QROM).
Last updated:  2025-09-18
BPSec-MLS: Asynchronous Key Agreement for Space Communications
Xisen Tian and Paul Westland
Key agreement is the cornerstone of any secure communications channel whether over the traditional internet or in delay tolerant networks used in space communications. However, space systems that rely on pre-shared keys face insurmountable limitations in both security and scalability. A single key compromise can expose all past and future encrypted communications, and the static nature of pre-shared keys prevents dynamic group membership, making it difficult to add or remove devices without invalidating entire key sets. To address these challenges, the Messaging Layer Security (MLS) protocol -- a recently introduced internet standard for asynchronous group key establishment -- offers promising capabilities. Its potential to provide efficient and dynamic key agreement for federated interplanetary architectures (e.g. government-commercial collaborations) has been previously recognized, particularly with integration of MLS with the Bundle Protocol Security (BPSec) framework. In this work, we present the first empirical findings on the integration of MLS with BPSec in a realistic space communications scenario and provide insights into its deployment in future space architectures.
Last updated:  2025-09-18
Computationally-Sound Symbolic Cryptography in Lean
Stefan Dziembowski, Grzegorz Fabiański, Daniele Micciancio, and Rafał Stefański
We present a formally-verified (in Lean 4) framework for translating symbolic cryptographic proofs into computationally sound ones. Symbolic cryptography is a well-established field that allows reasoning about cryptographic protocols in an abstract way and is relatively easy to verify using proof assistants. Unfortunately, it often lacks a connection to the computational aspects of real-world cryptography. Computationally sound cryptography, on the other hand, captures this connection much better, but it is often more complex, less accessible, and much harder to verify formally. Several works in the past have provided a bridge between the two, but, to our knowledge, none of them have been implemented in a proof assistant. We close this gap by formalizing the translation from symbolic to computationally sound cryptography in Lean 4. Our framework is based on the work of Micciancio (Eurocrypt, 2010) and Li and Micciancio (CSF, 2018), which builds on the idea of using co-induction (instead of induction) for reasoning about an adversary's knowledge in a symbolic setting. Our work encompasses (1) the formalization of the symbolic cryptography framework, (2) the formalization of the computationally sound cryptography framework, and (3) the formalization of the translation between the two. We also provide (4) an extended example of circuit garbling, a well-known cryptographic protocol frequently used in secure multi-party computation. We believe that our work will serve as a foundation for future research in the area of formal verification of cryptographic protocols, as it enables reasoning about cryptographic protocols more abstractly while still providing a formally verified connection to the computational aspects of real-world cryptography.
Last updated:  2025-09-18
A Constant-Rate Compiler for MPC over Noisy Networks
Ran Gelles, Carmit Hazay, Manuj Mukherjee, Jaspal Singh, Arun Yeragudipati, and Vassilis Zikas
The study of efficient multi-party computation (MPC) has been a central focus in the cryptographic literature, producing a wide range of innovative techniques that have substantially improved the practicality of MPC in real-world applications. However, the vast majority of this work assumes reliable communication channels and neglects the impact of network-level noise—a fundamental characteristic of modern communication systems. Although classical error-correcting codes can be used to address such noise, they often introduce significant overhead, potentially offsetting the efficiency gains provided by advanced cryptographic methods. To the best of our knowledge, existing efforts to circumvent this limitation rely on alternative techniques restricted to the two-party setting, with approaches that do not naturally generalize to the multi-party case. We initiate the study of information-theoretic MPC over noisy networks for more than two parties. We construct an MPC protocol that is secure against a semi-honest majority with an optimal communication overhead in circuit size: it computes any circuit $\mathcal{C}$ with communication complexity proportional to its size, i.e., $O(|\mathcal{C}|)$. Our protocol departs from the classical MPC methodology by introducing a novel multi-party interactive-coding scheme that remedies channel noise through a new rewinding technique while maintaining the properties required for secure computation. The ideas of our interactive coding can also be applied to other MPC scenarios to eliminate channel noise. In fact, our treatment uncovers (and resolves) a subtle flaw in the constant-rate 2PC construction by Gelles et al. [SCN~'18] over noisy channels.
Last updated:  2025-09-18
SNARK Lower Bounds via Communication Complexity
Rishabh Bhadauria, Alexander R. Block, Prantar Ghosh, and Justin Thaler
We initiate the study of lower bounding the verification time of Succinct Non-interactive ARguments of Knowledge (SNARKs) built in the Polynomial Interactive Oracle Proof + Polynomial Commitment Scheme paradigm. The verification time of these SNARKs is generally dominated by the polynomial commitment scheme, and so we want to understand if polynomial commitment schemes admit lower bounds on the verification time. By recognizing that polynomial commitment schemes are also often built by applying cryptography to some information-theoretic core protocol, we seek to separate this core from the cryptography in a way that meaningfully captures the verification time required by the polynomial commitment scheme verifier. We provide strong evidence that several polynomial commitment schemes have (nearly) optimal verifier times. Our evidence comes from connecting polynomial commitment schemes to certain information-theoretic protocols known as communication protocols from the field of communication complexity, a link which we believe to be of independent interest. Through this lens, we model the verifier work in the cryptographic protocols as information (i.e., number of bits) exchanged between parties in the communication protocols, allowing us to leverage lower bounds from communication complexity. These lower bounds give strong evidence that the verifier time in these polynomial commitment schemes must be at least the number of bits exchanged in the communication protocol. We extract the communication protocol cores of three polynomial commitment schemes and lower bound the bits exchanged in these cores. The lower bounds we obtain match (up to poly-logarithmic factors) the best-known (asymptotic) verification times of the polynomial commitment schemes we examine in this work. 
Specifically, we show that for univariate/multilinear polynomials of size $N=2^n$:
- the communication core of Hyrax PCS (Wahby et al., S&P 2016) requires $\Omega(\sqrt{N})$ bits to be exchanged;
- the communication core of Bulletproofs PCS (Bootle et al., EUROCRYPT 2016; Bünz et al., S&P 2018) requires $\Omega(N)$ bits to be exchanged; and
- the communication core of Dory PCS (Lee, TCC 2021) requires $\Omega(\log(N))$ bits to be exchanged.
Our results strongly suggest a negative answer to a longstanding open question on whether the Bulletproofs verifier can be made sublinear time.
Last updated:  2025-09-18
Pseudorandom Correlation Functions from Ring-LWR
Sebastian Hasler, Pascal Reisert, and Ralf Küsters
State-of-the-art actively secure multi-party computation protocols, like SPDZ (Damgård et al., CRYPTO 2012), use correlated randomness, like Beaver triples, to achieve a highly efficient online phase. For a long time, the generation of the correlated randomness in the offline phase relied on classical cryptographic primitives, like somewhat homomorphic encryption or oblivious transfer, that required significant communication. More recently, Boyle et al. (FOCS 2020) defined a new primitive called pseudorandom correlation functions (PCFs) to generate correlated randomness non-interactively. PCFs set up keys for each party in an initial interactive phase, which can then be used by the parties to generate a large number of shares of the correlated randomness without further communication. In the random oracle model (ROM), two-party PCFs can be generically constructed based on evaluating a weak pseudorandom function (WPRF) using a powerful-enough homomorphic secret sharing scheme. However, the concrete efficiency of instantiations of this approach has not been analyzed so far. There are also some works that construct PCFs based on other approaches, but they cannot be used for correlations of degree $\ge 2$ (e.g., Beaver triples) over large rings/fields (such as those used in SPDZ). In this paper, we improve the complexity and concrete efficiency of PCFs over large rings/fields by presenting a new generic PCF based on the hardness of the ring-learning with rounding (Ring-LWR) problem and FHE. We only share BFV keys in the initial interactive phase. The two parties then use the random oracle to locally sample BFV (pseudo-)ciphertexts encrypting pseudorandom plaintexts. We use a new bootstrapping algorithm for these (pseudo-)ciphertexts that reduces initially saturated noise to a level where the parties can use the homomorphic properties of the BFV scheme to correlate the encrypted randomness locally. 
Both parties can then produce, without further interaction, shares of the correlated randomness with their secret key share. Our new PCF works with any form of correlated randomness that can be expressed as an arithmetic circuit over a base ring $\mathbb Z_t$ or field $\mathbb F_{p^d}$, e.g., Beaver or matrix triples.
Last updated:  2025-09-18
Experience from UNITA Elections: Reconciling Revote, E2E Verifiability and Low Coercion
Feng Hao, Luke Harrison, Saverio Veltri, Irene Pugliatti, Chris Sinclair, and Gareth Nixon
This paper presents an experience of designing, building and deploying an online voting system for the Student Assembly elections in the UNITA Alliance with the following requirements. First, the system should allow voters to vote as many times as they wish before the election’s closing time, with only the last vote being counted (known as revote). Second, the system should allow end-to-end (E2E) verifiability. Third, the system should allow voters to cast votes under minimal influence from external forces or coercion. Developing an online voting system to meet these requirements poses a unique challenge. In this paper, we present an online voting system for UNITA elections, based on a variant of the DRE-ip protocol, to provide E2E verifiability with support for revote. The system adopts a two-server architecture and implements a separation of control between the two servers to protect the voter’s anonymity. The first UNITA elections were successfully concluded in March 2025, providing a case study for reconciling revote, E2E verifiability and low coercion in a real-world setting. The use of verifiable online voting to empower students from different European universities to elect the Student Assembly also serves as a model for more inclusive democratic governance of a university alliance.
Last updated:  2025-09-18
Extract Discriminative Features: Profiled Side-Channel Analysis for Cryptosystems Based on Supervised Contrastive Learning
Zoushaojie Jiang, An Wang, Yaoling Ding, Annv Liu, Zheng Liu, Jing Yu, and Liehuang Zhu
Deep learning-based profiled side-channel analysis (SCA) targeting cryptographic implementations has attracted significant attention in recent years. Generalizable deep learning mechanisms, such as contrastive learning-based profiled SCA (CL-SCA), can enhance the effectiveness of SCA without reliance on specific network architectures and hyperparameters. This independence enables robust adaptation across diverse attack scenarios. Nonetheless, CL-SCA relies heavily on data augmentation and may mistakenly push apart physical leakage traces that should belong to the same class, which interferes with the extraction of discriminative features crucial for SCA performance. To address these limitations, we propose a profiled SCA method based on supervised contrastive learning, called SupCL-SCA. This method enhances the learning of discriminative features that facilitate key recovery by leveraging supervised information to guide the extraction of similarities in feature space. Compared with state-of-the-art methods, SupCL-SCA not only retains their general applicability and inherent advantages but also eliminates reliance on complex data augmentation and multi-stage training. Additionally, we propose a cosine distance-based Intra-Inter Distance Ratio (IIDR) metric to assess the discriminative capability of models in deep learning-based profiled SCA methods. We evaluate SupCL-SCA on three publicly available datasets covering different implementations and platforms. Experimental results show that SupCL-SCA consistently reduces the number of traces required to recover the key compared to the original methods, demonstrating enhanced attack capability.
Last updated:  2025-09-18
Threshold ECDSA in Two Rounds
Yingjie Lyu, Zengpeng Li, Hong-Sheng Zhou, and Xudong Deng
We propose the first two-round multi-party signing protocol for the Elliptic Curve Digital Signature Algorithm (ECDSA) in the threshold-optimal setting, reducing the number of rounds by one compared to the state of the art (Doerner et al., S&P '24). We also resolve the security issue of presigning pointed out by Groth and Shoup (Eurocrypt '22), evading a security loss that increases with the number of pre-released, unused presignatures, for the first time among threshold-optimal schemes. Our construction builds on Non-Interactive Multiplication (NIM), a notion proposed by Boyle et al. (PKC '25), which allows parties to evaluate multiplications on secret-shared values in one round. In particular, we use the construction of Abram et al. (Eurocrypt '24) instantiated with class groups. The setup is minimal and transparent, consisting of only two class-group generators. The signing protocol is efficient in bandwidth, with a message size of 1.9 KiB at 128-bit security, and has competitive computational performance.
Last updated:  2025-09-18
Mk-PIR: Multi-Keyword Private Information Retrieval
Shengnan Zhao, Junyu Lu, Yuchen Huang, Dongdong Miao, and Chuan Zhao
Private information retrieval (PIR) enables a client to fetch a record from databases held by untrusted servers while hiding the access pattern (index or keyword) from the servers. In practical settings, however, data objects (e.g., articles, videos) are commonly tagged with multiple identifiers, which can be structured as {index, value, keywords}. Current PIR schemes are constrained to retrieving records based on a single index or a single keyword, and cannot efficiently handle conjunctive queries requiring multiple keywords. To address this limitation, we propose Mk-PIR, a PIR scheme that enables a client to retrieve records that match all specified keywords simultaneously. We propose two distinct constructions: $\textsf{MkPIR}^\mathbf{I}$, an inverted-index-based solution built upon our Oblivious Set Intersection (OSI) primitive, which enables private intersection computation on the server side without revealing client queries; and $\textsf{MkPIR}^\mathbf{F}$, a forward-index-based solution utilizing our Private Subset Determination (PSD) primitive, which privately outputs matching indices by verifying subset relationships. The two constructions adapt to diverse database configurations in which keywords are not required to be the primary key. Experimental results show that the average time to determine whether an index satisfies multiple keywords ranges from 0.5 to 332 ms, demonstrating that Mk-PIR achieves flexible query capabilities with modest performance overhead.
Last updated:  2025-09-18
Generic Anonymity Wrapper for Messaging Protocols
Lea Thiemt, Paul Rösler, Alexander Bienstock, Rolfe Schmidt, and Yevgeniy Dodis
Modern messengers use advanced end-to-end encryption protocols to protect message content even if user secrets are ever temporarily exposed. Yet, encryption alone does not prevent user tracking, as protocols often attach metadata, such as sequence numbers, public keys, or even plain user identifiers. This metadata reveals the social network as well as communication patterns between users. Existing protocols that hide metadata in Signal (i.e., Sealed Sender), for MLS-like constructions (Hashimoto et al., CCS 2022), or in mesh networks (Bienstock et al., CCS 2023) are relatively inefficient or specially tailored for only particular settings. Moreover, all existing practical solutions reveal crucial metadata upon exposures of user secrets. In this work, we introduce a formal definition of Anonymity Wrappers (AW) that generically hide metadata of underlying two-party and group messaging protocols. Our definition captures forward and post-compromise anonymity as well as authenticity in the presence of temporary state exposures. Inspired by prior wrapper designs, the idea of our provably secure AW construction is to use shared keys of the underlying wrapped (group) messaging protocols to derive and continuously update symmetric keys for hiding metadata. Beyond hiding metadata on the wire, we also avoid and hide structural metadata in users' local states for stronger anonymity upon their exposure. We implement our construction, evaluate its performance, and provide a detailed comparison with Signal's current approach based on Sealed Sender: Our construction reduces the wire size of small 1:1 messages from 441 bytes to 114 bytes. For a group of 100 members, it reduces the wire size of outgoing group messages from 7240 bytes to 155 bytes. We see similar improvements in computation time for encryption and decryption, but these improvements come with substantial storage costs for receivers. 
For this reason, we develop extensions with a Bloom filter for compressing the receiver storage. Based on this, Signal is considering deploying our solution.
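The wrapper idea described above, deriving and continuously updating symmetric keys from the shared keys of the wrapped protocol, can be illustrated with a minimal symmetric ratchet. This is an illustrative sketch only, not the paper's construction: all labels and the class name are hypothetical, and HMAC-SHA256 stands in for whatever KDF the scheme actually uses.

```python
# Illustrative sketch (not the paper's construction) of a symmetric key
# ratchet for metadata hiding: keys are derived from a shared messaging
# key and updated one-way, so exposing the current state reveals no past
# keys; mixing in a fresh shared secret restores secrecy going forward.
import hashlib
import hmac


def kdf(key: bytes, label: bytes) -> bytes:
    """One-step KDF: derive a sub-key bound to a label (HMAC-SHA256)."""
    return hmac.new(key, label, hashlib.sha256).digest()


class MetadataKeyRatchet:
    """Both peers advance this ratchet in lockstep to agree on the key
    that hides each message's metadata. Labels are hypothetical."""

    def __init__(self, shared_key: bytes):
        self.chain = kdf(shared_key, b"metadata-chain")

    def next_key(self) -> bytes:
        out = kdf(self.chain, b"metadata-key")
        self.chain = kdf(self.chain, b"chain-step")  # one-way update
        return out

    def mix(self, fresh_secret: bytes) -> None:
        """Mix in a fresh secret from the wrapped protocol (post-compromise)."""
        self.chain = kdf(self.chain, b"mix" + fresh_secret)
```

Two parties initialized with the same shared key produce identical key streams, while the one-way chain update means a later state exposure does not reveal earlier metadata keys.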
Last updated:  2025-09-18
Lattice Reduction via Dense Sublattices: A Cryptanalytic No-Go
Léo Ducas and Johanna Loyer
Most concrete analyses of lattice reduction focus on the BKZ algorithm or its variants relying on Shortest Vector Problem (SVP) oracles. However, a variant by Li and Nguyen (Cambridge U. Press 2014) exploits more powerful oracles, namely for the Densest rank-$k$ Sublattice Problem ($DSP_k$) for $k \geq 2$. We first observe that, for random lattices, $DSP_2$ --and possibly even $DSP_3$-- seems heuristically not much more expensive than solving SVP with the current best algorithm. We indeed argue that a densest sublattice can be found among pairs or triples of vectors produced by lattice sieving, at a negligible additional cost. This gives hope that this approach could be competitive. We therefore proceed to a heuristic and average-case analysis of the slope of $DSP_k$-BKZ output, inspired by a theorem of Kim (Journal of Number Theory 2022) which suggests a prediction for the volume of the densest rank-$k$ sublattice of a random lattice. Under this heuristic, the slope for $k=2$ or $3$ appears tenuously better than that of BKZ, making this approach less effective than regular BKZ using the "Dimensions for Free'' of Ducas (EUROCRYPT 2018). Furthermore, our experiments show that this heuristic is overly optimistic. Despite the hope raised by our first observation, we therefore conclude that this approach appears to be a No-Go for cryptanalysis of generic lattice problems.
Last updated:  2025-09-18
On the Explanation and Enhancement of Neural-inspired Differential Cryptanalysis
Weixi Zheng, Liu Zhang, and Zilong Wang
Neural networks have been applied to symmetric cryptanalysis, and Gohr demonstrated that a neural-network-based distinguisher achieves higher accuracy than classical differential distinguishers at CRYPTO 2019. In this work, we analyze ciphertext data through the lens of probability distributions, identifying non-random features that provide indirect insights into how neural networks distinguish ciphertexts from random data. In parallel, we improve the key-recovery attack process by adopting the Bayesian-UCB method, which achieves a better balance between exploration and exploitation of ciphertext structures. These enhancements reduce the runtime of key-recovery attacks while simultaneously increasing their success rate.
Last updated:  2025-09-18
Searchable Encryption for Conjunctive Queries with Extended Forward and Backward Privacy
Cong Zuo, Shangqi Lai, Shi-Feng Sun, Xingliang Yuan, Joseph K. Liu, Jun Shao, Huaxiong Wang, Liehuang Zhu, and Shujie Cui
Recent developments in the field of Dynamic Searchable Symmetric Encryption (DSSE) with forward and backward privacy have attracted much attention from both research and industrial communities. However, most DSSE schemes with forward and backward privacy only support single-keyword queries, which impedes their adoption in practice. Although some forward and backward private DSSE schemes with expressive queries (e.g., conjunctive queries) have been introduced, they either achieve backward privacy that essentially corresponds to single-keyword queries, or their forward privacy is not comprehensive. In addition, many DSSE schemes realize deletion as an addition paired with a deletion mark (i.e., lazy deletion). To address these problems, we present two novel DSSE schemes with conjunctive queries (termed \texttt{SDSSE-CQ} and \texttt{SDSSE-CQ-S}), which achieve both forward and backward privacy. To analyze their security, we present two new levels of backward privacy (named Type-O and Type-O$^-$, in increasing order of security), which give a more comprehensive understanding of the leakages of conjunctive queries in the \texttt{OXT} framework. Finally, the security analysis and experimental evaluations show that the proposed schemes achieve better security with a reasonable increase in computation and communication.
Last updated:  2025-09-18
Multi-signature in Fully Split Ring and Quantum Random Oracle Model
Shimin Pan, Tsz Hon Yuen, and Siu-Ming Yiu
Multi-signature schemes are crucial for secure operations in digital wallets and escrow services within smart contract platforms, particularly in the emerging post-quantum era. Existing post-quantum multi-signature constructions either do not address the stringent requirements of the Quantum Random Oracle Model (QROM) or fail to achieve practical efficiency due to suboptimal parameter choices. In this paper, we present a novel multi-signature scheme designed to be secure in the QROM and optimized for practical use. Our scheme operates over the polynomial ring $\mathbb{Z}_q[X]/(X^n+1)$ with $q \equiv 1 \pmod{2n}$, enabling full splitting of the ring and allowing for efficient polynomial arithmetic via the Number Theoretic Transform (NTT). This structure not only ensures post-quantum security but also bridges the gap between theoretical constructs and real-world implementation needs. We further propose a new hardness assumption, termed $\nu$-{\sf SelfTargetMSIS}, extending {\sf SelfTargetMSIS} (Eurocrypt 2018) to accommodate multiple challenge targets. We prove its security in the QROM and leverage it to construct a secure and efficient multi-signature scheme. Our approach avoids the limitations of previous techniques, reduces security loss in the reduction, and results in a more compact and practical scheme suitable for deployment in post-quantum cryptographic systems.
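The ring structure mentioned above is standard: when $q \equiv 1 \pmod{2n}$, the field $\mathbb{Z}_q$ contains a primitive $2n$-th root of unity $\psi$, so $X^n+1$ splits completely into linear factors $X - \psi^{2i+1}$, and multiplication in the ring reduces to pointwise multiplication of evaluations. A toy sketch with hypothetical small parameters (not the paper's), checking this against schoolbook negacyclic multiplication:

```python
# Toy demonstration that X^n + 1 fully splits mod q when q = 1 (mod 2n):
# hypothetical small parameters n = 8, q = 17 (17 = 1 mod 16), not the
# paper's. The NTT evaluates a polynomial at the odd powers of a
# primitive 2n-th root of unity psi, which are exactly the n roots of
# X^n + 1 in Z_q, turning ring multiplication into pointwise products.
n, q = 8, 17

# A primitive 2n-th root of unity satisfies psi^n = -1 (mod q).
psi = next(g for g in range(2, q) if pow(g, n, q) == q - 1)

def ntt(a):
    """Evaluate a(X) at psi^(2i+1) for i = 0..n-1 (naive O(n^2) version)."""
    return [sum(c * pow(psi, (2 * i + 1) * j, q) for j, c in enumerate(a)) % q
            for i in range(n)]

def negacyclic_mul(a, b):
    """Schoolbook multiplication in Z_q[X]/(X^n + 1), for cross-checking."""
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * bj) % q
            else:                       # reduce using X^n = -1
                res[k - n] = (res[k - n] - ai * bj) % q
    return res
```

Because each evaluation point is a root of $X^n+1$, the NTT of a negacyclic product equals the pointwise product of the NTTs; production implementations replace the naive loop with an $O(n \log n)$ butterfly.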
Last updated:  2025-09-18
Carousel: Fully Homomorphic Encryption with Bootstrapping over Automorphism Group
Intak Hwang, Seonhong Min, and Yongsoo Song
Homomorphic Encryption (HE) enables the secure computation of functions on ciphertexts without requiring decryption. Specifically, AP-like HE schemes exploit an intrinsic bootstrapping method called blind rotation. In existing blind rotation methods, a look-up table is homomorphically evaluated on the input ciphertext through iterative multiplication of monomials. However, the algebraic structure of the multiplicative group of monomials imposes certain limitations on the input plaintext space, as it can bootstrap only a fraction of the input plaintext space. In this work, we introduce a new HE scheme, Carousel, that solves this problem. The key idea of our approach is to utilize the automorphism group instead of monomials. More specifically, the look-up table is encoded into a single polynomial that can be rotated via a series of homomorphic multiplications and automorphisms. We instantiate Carousel with subring encoding proposed by Arita and Handa (ICISC '17) and provide a proof-of-concept implementation. Our benchmark result shows that Carousel can bootstrap 4-bit integers in under 30ms.
Last updated:  2025-09-18
MIZAR: Boosting Secure Three-Party Deep Learning with Co-Designed Sign-Bit Extraction and GPU Acceleration
Ye Dong, Xudong Chen, Xiangfu Song, Yaxi Yang, Tianwei Zhang, and Jin-Song Dong
Three-party secret sharing-based computation has emerged as a promising approach for secure deep learning, benefiting from its high throughput. However, it still faces persistent challenges in computing complex operations such as secure Sign-Bit Extraction, particularly in high-latency and low-bandwidth networks. A recent work, Aegis (Lu et al., Cryptology ePrint'2023), made significant strides by proposing a constant-round DGK-style Sign-Bit Extraction protocol with GPU acceleration on Piranha (Watson et al., USENIX Security'2022). However, Aegis exhibits two critical limitations: it (i) overlooks the use of \textit{bit-wise prefix-sum}, and (ii) inherits non-optimized modular arithmetic over prime fields and excessive memory overhead from the underlying GPU-based MPC framework. This results in suboptimal performance in terms of communication, computation, and GPU memory usage. Driven by the limitations of Aegis, we propose an optimized constant-round secure Sign-Bit Extraction protocol with communication and GPU-specific optimizations. Concretely, we construct a new masked randomized list by exploiting the upper bound of the bit-wise prefix-sum to reduce online communication by up to $50\%$, and integrate fast modular-reduction and kernel-fusion techniques to enhance GPU utilization in MPC protocols. We also propose specific optimizations for secure piecewise polynomial approximations and Maxpool computation in neural network evaluations. Finally, we instantiate these protocols as a framework, MIZAR, and report their improved performance over state-of-the-art GPU-based solutions: (i) for secure Sign-Bit Extraction, we achieve a speedup of $2$--$2.5\times$ and reduce communication by $2$--$3.5\times$; (ii) we improve the performance of secure evaluation of nonlinear functions and neural networks by $1.5$--$3.5\times$; and (iii) our framework achieves $10\%$--$50\%$ GPU memory savings.
Last updated:  2025-09-17
Quasi-perfect (de)compression of elliptic curve points in the highly $2$-adic scenario
Dimitri Koshelev
This short note is devoted to a significant enhancement of [8], satisfactorily resolving the problem formulated at the end of that article. More precisely, a new laconic, secure, and efficient (de)compression method is provided for points of any elliptic curve over any highly $2$-adic finite field of large characteristic. Such fields are ubiquitous in modern elliptic curve cryptography, whereas they severely slow down the conventional $x$-coordinate (de)compression technique. In comparison with the main method from the cited work, the new one requires neither complicated mathematical formulas nor conditions on the curve. Thereby, the current work is universal and much more implementation-friendly, which justifies its existence, despite the absence of interesting mathematics behind it. 8. Koshelev, D.: Point (de)compression for elliptic curves over highly $2$-adic finite fields. Advances in Mathematics of Communications 19(5), 1539–1559 (2025), https://doi.org/10.3934/amc.2025008
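For context, the conventional $x$-coordinate technique the note refers to stores $x$ plus one parity bit of $y$, and decompression takes a modular square root; with the standard Tonelli-Shanks algorithm, the inner loop length is governed by the 2-adicity $s$ of $p-1$ (the exponent in $p-1 = 2^s q$ with $q$ odd), which is exactly what makes highly 2-adic fields slow here. A toy sketch with hypothetical small parameters (curve and prime are illustrative, not from the paper):

```python
# Conventional x-coordinate (de)compression over a toy prime field.
# Hypothetical parameters: p - 1 = 96 = 2^5 * 3, so the 2-adicity is
# s = 5; Tonelli-Shanks' while-loop cost grows with s, illustrating why
# highly 2-adic fields slow this method down.
p = 97
A, B = 2, 3            # illustrative curve y^2 = x^3 + 2x + 3 over F_97

def sqrt_mod(n, p):
    """Tonelli-Shanks square root of a quadratic residue n modulo prime p."""
    if n == 0:
        return 0
    assert pow(n, (p - 1) // 2, p) == 1, "n must be a quadratic residue"
    q, s = p - 1, 0
    while q % 2 == 0:                    # write p - 1 = 2^s * q, q odd
        q //= 2
        s += 1
    z = next(z for z in range(2, p)
             if pow(z, (p - 1) // 2, p) == p - 1)   # any non-residue
    m, c, t, r = s, pow(z, q, p), pow(n, q, p), pow(n, (q + 1) // 2, p)
    while t != 1:
        i, t2 = 0, t                     # least i with t^(2^i) = 1;
        while t2 != 1:                   # up to m - 1 squarings per round
            t2 = t2 * t2 % p
            i += 1
        b = pow(c, 1 << (m - i - 1), p)
        b2 = b * b % p
        m, c, t, r = i, b2, t * b2 % p, r * b % p
    return r

def compress(P):
    x, y = P
    return x, y & 1                      # x plus one parity bit for y

def decompress(x, sign):
    y = sqrt_mod((x**3 + A * x + B) % p, p)
    return x, y if (y & 1) == sign else p - y
```

Compression is essentially free; the cost asymmetry lives entirely in `sqrt_mod`, whose inner loop can run up to roughly $s$ rounds of up to $s$ squarings each when $p-1$ is highly 2-adic.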
Last updated:  2025-09-17
Back to the future: simple threshold decryption secure against adaptive corruptions
Victor Shoup
We present a practical, non-interactive threshold decryption scheme. It can be proven CCA secure with respect to adaptive corruptions in the random oracle model under the decisional Diffie-Hellman assumption. Our scheme, called TDH2a, is a minor modification of the TDH2 scheme presented by Shoup and Gennaro at Eurocrypt 1998, which was proven secure against static corruptions under the same assumptions. The design and analysis of TDH2a are based on a straightforward extension of the simple information-theoretic argument underlying the security of the Cramer-Shoup encryption scheme presented at Crypto 1998. We also present another variant, TDH2abc, which offers a higher degree of backward compatibility with TDH2, allowing one to effectively "hot swap" TDH2abc in to replace TDH2 in a "live" system without changing the public encryption key, so that any extant ciphertexts remain decryptable.
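The pattern underlying TDH2-style schemes is threshold ElGamal decryption: the secret key is Shamir-shared, each party raises the ciphertext component to its share, and the partial decryptions are combined by Lagrange interpolation in the exponent. A minimal Python sketch follows, with illustrative toy parameters; it omits everything that makes TDH2/TDH2a CCA secure (hash checks and zero-knowledge proofs of correct partial decryption).

```python
import secrets

# Toy group (illustrative only): the order-11 subgroup of Z_23^*,
# generated by g = 4. Real deployments use standardized large groups.
p, q, g = 23, 11, 4

def shamir_share(secret, t, n):
    """Shamir-share `secret` in Z_q with threshold t among parties 1..n."""
    coeffs = [secret] + [secrets.randbelow(q) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, j, q) for j, c in enumerate(coeffs)) % q
    return {i: f(i) for i in range(1, n + 1)}

def encrypt(pk, m):
    """Plain ElGamal; m must be a group element."""
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def partial_decrypt(share, c1):
    """A party's non-interactive decryption share: c1^{x_i}."""
    return pow(c1, share, p)

def lagrange_at_zero(i, ids):
    """Lagrange coefficient of party i for interpolation at 0, mod q."""
    num = den = 1
    for j in ids:
        if j != i:
            num = (num * (-j)) % q
            den = (den * (i - j)) % q
    return (num * pow(den, -1, q)) % q

def combine(parts, c2):
    """Combine t decryption shares {i: c1^{x_i}} in the exponent."""
    ids = list(parts)
    blind = 1
    for i in ids:
        blind = (blind * pow(parts[i], lagrange_at_zero(i, ids), p)) % p
    return (c2 * pow(blind, -1, p)) % p

x = secrets.randbelow(q)                 # secret key
pk = pow(g, x, p)                        # public key g^x
shares = shamir_share(x, t=3, n=5)
m = pow(g, 7, p)                         # message encoded as a group element
c1, c2 = encrypt(pk, m)
parts = {i: partial_decrypt(shares[i], c1) for i in (1, 3, 5)}
assert combine(parts, c2) == m
```

Note that no inter-party interaction is needed: each party computes its share locally, and any $t$ shares suffice.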
Last updated:  2025-09-17
IVC in the Open-and-sign Random Oracle Model
Mary Maller, Nicolas Mohnblatt, and Arantxa Zapico
Incrementally verifiable computation (IVC) is a powerful cryptographic primitive, particularly suited for proving long-running machine computations. Previous work shows that IVC can be constructed by recursively composing SNARKs. Unfortunately, theoretical challenges limit the provable security of known IVC constructions. Recursive composition may quickly lead to a blowup in extraction time and may require arithmetic circuits to enforce constraints about random oracle calls. Furthermore, composition presents a practical challenge: proofs are often expressed in a form that is not friendly to the arithmetic circuits that produce them. To mitigate the theoretical challenges, we present the Open-and-Sign Random Oracle Model (osROM) as an extension to the signed random oracle of Chiesa and Tromer (ICS '10). This model, while strictly harder to instantiate than the Random Oracle Model, allows the design of protocols that can efficiently verify calls to the oracle and support straight-line extractors. As a result, IVC constructions in the osROM can be shown to have provable security for polynomial depths of computation. Under our new model, we construct a framework to build secure IVC schemes from simple non-interactive reductions of knowledge. Our construction natively supports cycles of elliptic curves in the style of Ben-Sasson et al. (CRYPTO '14), thus answering the practical challenge outlined above. Finally, we analyze the HyperNova (CRYPTO '24) IVC scheme in the osROM and show that it is secure over a two-cycle of elliptic curves, for polynomial depths of computation.
Last updated:  2025-09-17
Combined Stability: Protecting against Combined Attacks
Dilara Toprakhisar, Svetla Nikova, and Ventzislav Nikov
Physical attacks pose serious challenges to the secure implementation of cryptographic algorithms. While side-channel analysis (SCA) has received significant attention, leading to well-established countermeasures, fault attacks and especially their combination with SCA (i.e., combined attacks) remain less researched. Addressing such combined attacks often requires a careful integration of masking and redundancy techniques to resist the reciprocal effects of faults and probes. Recent research on combined security has gained momentum, with most approaches relying on composable security notions involving error correction, typically applied after each nonlinear operation. While effective, this approach introduces an area and performance overhead, along with additional security challenges posed by the correction circuits themselves. In this work, we take a different direction, following the concept of stability introduced in StaTI (CHES 2024), which ensures fault propagation to protect against ineffective faults. We extend this concept to combined security by proposing a new composable security notion, combined stability, which integrates an extended stability notion, diffused stability, with arbitrarily composable glitch-extended probing security notions. Notably, this framework requires only a single error detection at the end of the computation, avoiding costly intermediate error checks and corrections. To demonstrate practicality, we describe a combined secure AES S-box hardware implementation. Our results show that this approach, achieving combined security with competitive implementation costs, offers a promising alternative to error-correction-based schemes.
Last updated:  2025-09-17
Efficient post-quantum commutative group actions from orientations of large discriminant
Marc Houben
We describe an algorithm to efficiently evaluate class group actions on supersingular elliptic curves that are oriented by an imaginary quadratic order of arbitrarily large discriminant. Contrary to CSIDH, this makes it possible to increase the post-quantum security of the group action without increasing the size of the base field. In particular, we describe instances where Kuperberg's algorithm loses to generic supersingular isogeny path finding. Our algorithm is fully deterministic, strictly constant-time, dummy-free, and can be implemented without conditional branches. We show that the (restricted effective) group action can be employed in a non-interactive key exchange protocol, which we argue is asymptotically more efficient than CSIDH.
Last updated:  2025-09-17
Pilvi: Lattice Threshold PKE with Small Decryption Shares and Improved Security
Valerio Cini, Russell W. F. Lai, and Ivy K. Y. Woo
Threshold public-key encryption (tPKE) enables any subset of $t$ out of $K$ parties to decrypt non-interactively, while ciphertexts remain secure as long as fewer than $t$ decryption shares are known. Despite recent progress, existing lattice-based tPKEs face at least one of the following drawbacks: (1) large decryption shares -- polynomial in $K$, and in some cases even exponential in $t$; (2) security proven only in relaxed models where the adversary is not allowed to see decryption shares of challenge ciphertexts; and (3) a lack of concrete efficiency, in particular due to the super-polynomial modulus required for noise flooding. We present $\mathsf{Pilvi}$, a new thresholdised variant of Regev's public-key encryption scheme, which achieves both small decryption shares and a strong form of simulation-based security under the Learning with Errors (LWE) assumption. Our construction has decryption share size $t \cdot \log K \cdot \mathsf{poly}(\lambda)$ and allows the use of a polynomial-size modulus assuming an a priori bound on the number of queries $Q$. It remains secure even when an adaptive adversary requests partial decryptions of both challenge and non-challenge ciphertexts, as long as, for each ciphertext, the number of corrupt parties plus the number of shares obtained is less than $t$. We provide concrete parameter suggestions for 128-bit security for a wide range of $(t,K,Q)$, including cases where $t \approx K/2$ for up to $K \leq 32$ users and $Q \leq 2^{60}$ partial decryption queries. The ciphertext size ranges from $14$ to $58$ KB and the partial decryption share size ranges from $1$ to $4$ KB. Along the way, we abstract out a general-purpose tool called the threshold-LWE assumption, which we prove to follow from LWE.
The threshold-LWE assumption captures the core steps in security proofs of schemes that Shamir-share the LWE secret with carefully chosen evaluation points; the algebraic structure of the latter is what enables the efficiency of our tPKE scheme. As an additional application, we also show how to construct distributed pseudorandom functions (dPRFs) from the threshold-LWE assumption.
Last updated:  2025-09-17
A Tight Quantum Algorithm for Multiple Collision Search
Xavier Bonnetain, Johanna Loyer, André Schrottenloher, and Yixin Shen
Searching for collisions in random functions is a fundamental computational problem, with many applications in symmetric and asymmetric cryptanalysis. When one searches for a single collision, the known quantum algorithms match the query lower bound. This is not the case for the problem of finding multiple collisions, despite its regular appearance as a sub-component in sieving-type algorithms. At EUROCRYPT 2019, Liu and Zhandry gave a query lower bound $\Omega{}(2^{m/3 + 2k/3})$ for finding $2^k$ collisions in a random function with $m$-bit output. At EUROCRYPT 2023, Bonnetain et al. gave a quantum algorithm matching this bound for a large range of $m$ and $k$, but not all admissible values. Like many previous collision-finding algorithms, theirs is based on the MNRS quantum walk framework, but it chains the walks by reusing the state after outputting a collision. In this paper, we give a new algorithm that tackles the remaining non-optimal range, closing the problem. Our algorithm is tight (up to a polynomial factor) in queries, and also in time under a quantum RAM assumption. The idea is to extend the chained walk to a regime in which several collisions are returned at each step, and the "walks" themselves only perform a single diffusion layer.
Last updated:  2025-09-17
Multi User Security of LightMAC and LightMAC_Plus
Nilanjan Datta, Shreya Dey, Avijit Dutta, and Devdutto Kanungo
LightMAC is one of the ISO/IEC-standardized message authentication codes; it provably achieves security roughly of order $O(q^2/2^n)$, where $q$ is the total number of queries and $n$ is the block size of the underlying block cipher. In a subsequent work, Naito proposed a beyond-birthday-bound variant of the LightMAC construction, dubbed LightMAC_Plus, and demonstrated that it achieves $2n/3$-bit PRF security. Later, at EUROCRYPT'20, Kim et al. improved the security bound of LightMAC_Plus from $2n/3$ bits to $3n/4$ bits. However, all these security results were proven in the single-user setting, where the adversary is assumed to have access to a single instance of the construction. In this paper, we investigate, for the first time, the security of the LightMAC and LightMAC_Plus constructions in the multi-user setting, where the adversary has access to more than one instance of the construction. In particular, we show that LightMAC offers $O(q q_{\max} l k/2^n + up/2^k)$ multi-user PRF security, where $q$ denotes the total number of construction queries, $p$ the total number of offline primitive queries, $q_{\max}$ the maximum number of queries per user, $u$ the total number of users, and $l$ the maximum number of message blocks in a query. We also show that LightMAC_Plus maintains security up to approximately $2^{2n/3}$ construction queries and $2^{2k/3}$ ideal-cipher queries in the ideal-cipher model, where $n$ denotes the block size and $k$ the key size of the block cipher.
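For reference, the LightMAC mode itself is simple: each $(n-s)$-bit message chunk is prefixed with an $s$-bit counter and encrypted under the first key, the results are XOR-summed, and the 10*-padded final chunk is folded into the sum and encrypted under the second key. A byte-oriented Python sketch, with a hash-based stand-in for the block cipher (a real instantiation would use AES) and illustrative parameter choices:

```python
import hashlib

N, S = 16, 4  # block size and counter size in bytes (illustrative)

def E(key: bytes, block: bytes) -> bytes:
    """Stand-in for the block cipher: a hash-based PRF, NOT a real PRP."""
    return hashlib.sha256(key + block).digest()[:N]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def lightmac(k1: bytes, k2: bytes, msg: bytes) -> bytes:
    """LightMAC: XOR-sum of E_k1(counter || chunk) over all but the last
    (N-S)-byte chunk; the 10*-padded final chunk is XORed into the sum,
    which is then encrypted under k2 to produce the tag."""
    chunk = N - S
    blocks = [msg[i:i + chunk] for i in range(0, len(msg), chunk)] or [b""]
    v = bytes(N)
    for i, m in enumerate(blocks[:-1], start=1):
        v = xor(v, E(k1, i.to_bytes(S, "big") + m))
    last = blocks[-1] + b"\x80" + bytes(N - len(blocks[-1]) - 1)
    return E(k2, xor(v, last))
```

The spec additionally allows truncating the tag; the counter makes all but the final block-cipher calls parallelizable, which is the source of LightMAC's efficiency.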
Last updated:  2025-09-17
Traceable Threshold Encryption without a Trusted Dealer
Jan Bormet, Jonas Hofmann, and Hussien Othman
The fundamental assumption in $t$-out-of-$n$ threshold encryption is that the adversary can only corrupt fewer than $t$ parties. However, this may be unrealistic in practical scenarios where shareholders could have financial incentives to collude. Boneh, Partap, and Rotem (Crypto'24) addressed the case where $t$ or more shareholders collude, adding a traceability mechanism to identify at least one colluder. Their constructions require a trusted dealer to distribute secret shares, but it is unclear how to achieve traceability without this trusted party. Since threshold encryption aims to avoid a single point of failure, a natural question is whether we can construct an efficient, traceable threshold encryption scheme without relying on a trusted dealer. This paper presents two dealerless, traceable threshold encryption constructions by extending the PLBE primitive of Boneh et al. (Eurocrypt'06) and combining it with the silent setup threshold encryption construction of Garg et al. (Crypto'24). Our first construction achieves an amortized ciphertext size of $O(1)$ (for $O(n)$ ciphertexts), and the second achieves constant ciphertext size in the worst case but with a less efficient preprocessing phase. Both have constant secret key sizes and require no interaction between parties. A limitation of Boneh et al.’s constructions is that they only guarantee identifying one colluder, leaving the problem of tracing more traitors unsolved. We address this by applying a technique to our first construction that enables tracing up to $t$ traitors.
Last updated:  2025-09-17
IPCrypt: Optimal, Practical Encryption of IP Addresses for Privacy and Measurement
Frank Denis
This paper introduces efficient, practical methods for encrypting IPv4/IPv6 addresses while preserving utility in logs, telemetry, and third-party data exchange. We focus on three practical goals: (i) format-compatible encryption that keeps outputs in the IPv6 address space and handles IPv4 inputs canonically; (ii) prefix-preserving encryption that retains network structure for analytics while hiding host identity; and (iii) non-deterministic encryption that resists correlation while remaining compact and invertible. We give deterministic, prefix-preserving, and two non-deterministic variants, with security models and arguments under standard assumptions, plus explicit usage bounds and operating limits. We also relate each variant to known efficiency lower bounds (ciphertext expansion and primitive calls) and state our claims within deployable parameter ranges.
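The prefix-preserving goal described above is classically achieved with a bit-by-bit construction in the style of Crypto-PAn: output bit $i$ is input bit $i$ XORed with a pseudorandom function of the preceding $i$ bits, so two addresses sharing a $k$-bit prefix encrypt to addresses sharing a $k$-bit prefix. The paper's exact scheme may differ; the following Python sketch (IPv4, HMAC as the PRF) is illustrative only.

```python
import hmac
import hashlib
import ipaddress

def prefix_preserving_encrypt(key: bytes, ip: str) -> str:
    """Toy prefix-preserving IPv4 encryption (Crypto-PAn style):
    flip each bit by a PRF of the plaintext prefix before it."""
    bits = format(int(ipaddress.IPv4Address(ip)), "032b")
    out = []
    for i in range(32):
        prefix = bits[:i].encode()
        pad = hmac.new(key, prefix, hashlib.sha256).digest()[0] & 1
        out.append(str(int(bits[i]) ^ pad))
    return str(ipaddress.IPv4Address(int("".join(out), 2)))

key = b"0123456789abcdef"
enc1 = prefix_preserving_encrypt(key, "192.168.1.10")
enc2 = prefix_preserving_encrypt(key, "192.168.1.77")
# Both inputs share a /24, so both outputs share their first three octets.
assert enc1.rsplit(".", 1)[0] == enc2.rsplit(".", 1)[0]
```

The map is invertible: decryption recovers bits left to right, recomputing each pad from the already-recovered plaintext prefix.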
Last updated:  2025-09-17
Pairwise independence of AES-like block ciphers
Tim Beyne, Gregor Leander, and Immo Schütt
We show that $4r + 4$ rounds of a variant of the AES with independent and uniform random round keys are $\varepsilon$-pairwise independent with $\varepsilon = 2^{14}\, 2^{-30r}$. We deduce this bound from a two-norm version of pairwise-independence for SHARK-type ciphers based on the third-largest singular value of the difference-distribution table of the S-box. This approach was worked out in the master thesis of Immo Schütt. Our bounds leave room for improvement, both in the constant prefactor $2^{14}$ — due to a rough conversion between norms — and in the exponent. These improvements will be worked out in an extended version of this note.
Last updated:  2025-09-17
Revisiting Adaptively Secure IBE from Lattices with Smaller Modulus: A Conceptually Simple Framework with Low Overhead
Weidan Ji, Zhedong Wang, Lin Lyu, and Dawu Gu
Most adaptively secure identity-based encryption (IBE) constructions from lattices in the standard model follow the framework proposed by Agrawal et al. (EUROCRYPT 2010). However, this framework has an inherent restriction: the modulus is quadratic in the trapdoor norm. This leads to an unnecessarily large modulus, reducing the efficiency of the IBE scheme. In this paper, we propose a novel framework for adaptively secure lattice-based IBE in the standard model that removes this quadratic restriction on the modulus while keeping the dimensions of the master public key, secret keys, and ciphertexts unchanged. More specifically, our key observation is that the original framework has a \textit{natural} cross-multiplication structure in its trapdoor. Building on this observation, we design two novel algorithms with non-spherical Gaussian outputs that fully utilize this structure and thus remove the restriction. Furthermore, we apply our framework to various IBE schemes with different partitioning functions, in both integer and ring settings, demonstrating its significant improvements and broad applicability. Moreover, compared to concurrent work by Ji et al. (PKC 2025), our framework is significantly simpler in design and enjoys a smaller modulus, a more compact master public key, and shorter ciphertexts.
Last updated:  2025-09-16
Web3 Recovery Mechanisms and User Preferences
Easwar Vivek Mangipudi, Panagiotis Chatzigiannis, Konstantinos Chalkias, Aniket Kate, Mohsen Minaei, and Mainack Mondal
In a Web3 (blockchain) setting, account recovery allows users to regain access to their accounts after losing their authentication credentials. Although recovery mechanisms are well-established and extensively analyzed in the context of Web2 systems, Web3 presents distinct challenges. Web3 account access is typically tied to cryptographic key pairs, and private keys are not entrusted to centralized entities. This design improves security, but significantly complicates the recovery process, making it difficult or even impossible for users to regain access after loss of keys. Given the critical role that recovery plays in ensuring long-term feasibility and trust in digital systems, a range of recovery mechanisms has been proposed to accommodate the unique properties of Web3. These mechanisms aim to help users manage key loss without introducing undue friction or risk. Although there has been an exponential increase in the use of cryptocurrency wallets in the last decade, the popularity and usage of the corresponding recovery mechanisms remain unclear. Furthermore, it is still unclear how users perceive these recovery mechanisms and what they expect from them. In this work, our objective is to empirically understand and analyze user perceptions of the various recovery mechanisms. To this end, we conducted a user survey of 331 participants and asked them to rate different mechanisms on usability, security, and availability. The results show interesting aspects of the user preferences, including their view of sharing keys among different devices and trusting their friends or family. Based on our findings, we provide insight and future directions for the developer and research community.
Last updated:  2025-09-16
Breaking Omertà: On Threshold Cryptography, Smart Collusion, and Whistleblowing
Mahimna Kelkar, Aadityan Ganesh, Aditi Partap, Joseph Bonneau, and S. Matthew Weinberg
Cryptographic protocols often make honesty assumptions---e.g., fewer than $t$ out of $n$ participants are adversarial. In practice, these assumptions can be hard to ensure, particularly given monetary incentives for participants to collude and deviate from the protocol. In this work, we explore combining techniques from cryptography and mechanism design to discourage collusion. We formalize protocols in which colluders submit a cryptographic proof to whistleblow against their co-conspirators, revealing the dishonest behavior publicly. We provide general results on the cryptographic feasibility, and show how whistleblowing fits a number of applications including secret sharing, randomness beacons, and anonymous credentials. We also introduce smart collusion---a new model for players to collude. Analogous to blockchain smart contracts, smart collusion allows colluding parties to arbitrarily coordinate and impose penalties on defectors (e.g., those that blow the whistle). We show that unconditional security is impossible against smart colluders even when whistleblowing is anonymous and can identify all colluding players. On the positive side, we construct a whistleblowing protocol that requires only a small deposit and can protect against smart collusion even with roughly $t$ times larger deposit.
Last updated:  2025-09-16
Honest Users Make Honest Mistakes: A Framework for Analysing eID Protocols
Ole Martin Edstrøm, Kristian Gjøsteen, Hans Heum, Sjouke Mauw, and Felix Stutz
Electronic identification (eID) protocols and federated identity management systems play an increasingly important role in our modern society, both on the internet through services from Google and others, and through the eIDAS regulation in Europe. A key feature of eID protocols is that humans are intimately involved in the protocol, often responsible for critical security steps. Traditional security analyses of such protocols typically assume flawless user behaviour, yet widespread real-world adoption makes user mistakes inevitable. We present a framework for analysing the security of eID protocols that can model users making mistakes. It is suitable for automated analysis with Tamarin and supports fine-grained corruption modelling of protocol actors. We demonstrate the framework's utility by describing and analysing common eID protocols based on passwords, mobile applications and authentication tokens, as well as by systematically evaluating the impact of various combinations of user mistakes on security.
Last updated:  2025-09-16
Toss: Garbled PIR from Table-Only Stacking
Lucien K. L. Ng and Vladimir Kolesnikov
Garbled Circuits (GC) are a foundational primitive for secure two-party computation (2PC). Garbled Private Information Retrieval (GPIR) is a GC technique for looking up a public array or database (DB) on a private index unknown to either party. GPIR immediately enables GC evaluation of functions implemented as a publicly known lookup table (LUT). However, GPIR is costly. It can be realized by a linear scan, by adapting Garbled RAM, by stacking GC branches implementing access to table elements, and, most recently, via the GC lookup table scheme logrow (Heath et al., Eurocrypt 2024). For an $N$-row DB lookup of $m$-bit rows, logrow has computation cost $\approx O(N m \kappa)$ and communication cost $O(m(\log N \cdot \kappa + N))$. This makes logrow practical only for tables up to about $2^{15}$ rows. We propose Toss, a new efficient GPIR protocol with dramatically reduced bandwidth consumption—an especially scarce resource in MPC—both asymptotically and concretely. Our communication cost is $O\!\left(\sqrt{N}\, m \sqrt{\kappa}\right)$ with a small constant, which is sublinear in both $N$ and the security parameter $\kappa$. Our computation cost is $O\!\left(N m \kappa + \bigl(\sqrt{N/\kappa}\, m + N\bigr) c_\kappa \right)$, where $c_\kappa$ is the cost of a hash evaluation. This computation cost is comparable to, or slightly lower than, that of logrow. In concrete terms, for a $2^{20}$-row LUT of 8-bit items, we achieve more than a $31\times$ reduction in communication compared to logrow. On a laptop over a 100 Mbps channel, throughput increases from approximately $10.6$ lookups/s to $81$ lookups/s, a $>7.5\times$ improvement. On a 10 Mbps channel, Toss achieves more than $28\times$ better throughput. The improvement grows with $N$; for example, for $N=2^{25}$ and $m=32$, the gain exceeds $512\times$. Toss builds on stacked garbling (SGC) and logrow, incorporating multiple low-level optimizations and requiring a reworking of their internals and interfaces. 
We emphasize that constructing GPIR directly from SGC incurs logarithmic computational overhead, which decreases throughput in typical “laptop + LAN” testbeds. Our design avoids this pitfall. We implement Toss and report on its performance, demonstrating its substantial communication savings and practical efficiency.
Last updated:  2025-09-16
FHEMaLe: Framework for Homomorphic Encrypted Machine Learning
B Pradeep Kumar Reddy, Sameeksha Goyal, Ruchika Meel, and Ayantika Chatterjee
Machine learning (ML) has revolutionized various industries by leveraging predictive models and data-driven insights, often relying on cloud computing for large-scale data processing. However, this dependence introduces challenges such as bandwidth constraints and network latency. Edge computing mitigates these issues by enabling localized processing, reducing reliance on continuous cloud connectivity, and optimizing resource allocation for dynamic workloads. Given the limited computational capacity of sensory nodes in ML systems, edge devices provide an effective solution by offloading processing tasks. A critical challenge in this paradigm, however, is ensuring user privacy while handling sensitive data in both cloud and edge processing. To address this, we propose a Fully Homomorphic Encryption (FHE)-enabled framework that performs ML computations directly on encrypted data, eliminating the need for decryption. The main challenge in designing such a framework is that the complex implementation steps of ML need to be revisited, with suitable optimizations, to match the requirements of FHE processing. Several standard libraries provide the basic computation blocks on which encrypted ML processing can be built. These libraries vary in the computation operators they support, their computational complexity, and their memory demands, which in turn introduces latency and throughput challenges, especially on resource-constrained edge nodes. For example, the general HE library CKKS (Cheon-Kim-Kim-Song), with its support for packing and approximate homomorphic operations, is widely considered the best choice for implementing privacy-preserving AI algorithms. However, our analysis shows that leveled CKKS is limited in implementing complex operators and is hence not suitable, without approximation, for certain ML algorithms such as KNN, logistic regression, or general activation functions in neural networks.
To avoid the accuracy drops associated with approximation, the Torus-based FHE library (TFHE) can be a better choice for making certain ML implementations feasible. Moreover, our study shows that, compared to TFHE, CKKS has a large memory requirement that makes it unsuitable for resource-constrained edge devices. The choice of underlying library for such a framework is therefore crucial, given the trade-off between latency and accuracy. In this work, we propose an integrated framework, FHEMaLe, for encrypted ML processing, which takes the model architecture, desired accuracy, and platform preference as inputs; based on these, an appropriate execution environment is selected: a cloud platform leveraging the CKKS homomorphic encryption library, or an edge platform using the TFHE library. Further analysis shows the limitations of performing FHE ML on a single edge device; our framework therefore partitions encrypted data, transmits it via a fabric API, and performs distributed encrypted ML computations across an edge cluster. We implement distributed ML inference for algorithms such as $K$-Nearest Neighbors (KNN) (cloud CKKS = 248 sec, edge TFHE = 37 min), Support Vector Machine (SVM) (cloud CKKS = 18 sec, edge TFHE = 4.15 min), and Logistic Regression (LR) (cloud CKKS = 17 sec, edge TFHE = 7.82 min) on a cluster of 11 edge nodes. This work explains why KNN suffers from a major performance bottleneck in the encrypted domain and may not be a good choice for encrypted ML processing without application-specific optimizations. Furthermore, our encrypted operators are capable of supporting encrypted NN processing (cloud CKKS = 57 sec), and we explain why CKKS is the preferred choice in this case. The distributed nature of our implementation shows promise for further improvement and scalability with the support of a larger cluster.
Last updated:  2025-09-16
DNDK: Combining Nonce and Key Derivation for Fast and Scalable AEAD
Shay Gueron and Thomas Ristenpart
Authenticated encryption with associated data (AEAD) schemes are responsible for securing increasingly critical digital infrastructure. Unfortunately, current widely deployed schemes suffer from various limitations that make them difficult to use securely in practice. Popular schemes like AES-GCM limit the amount of data that can be encrypted with a single key, preventing secure scaling to modern workloads. At the same time, practitioners may not be able to move away from the use of AES-GCM due to mature and widely deployed implementations, legacy constraints, and compliance. In this paper, we provide approaches to improve the secure scaling of AEAD schemes via what we call derived-nonce, derived-key (DNDK) transforms. At a high level, such transforms use a root key to derive a nonce and key for use with an underlying scheme. The challenge is doing so in a way that introduces as little overhead as possible, while relying on a small number of assumptions on the used primitives. We provide some general results about secure scaling transforms and a concrete design for AES-GCM that is called DNDK-GCM. It requires as little as three additional AES calls to enable use of the same key to encrypt up to $2^{64}$ bytes of data, even when using random nonces. We also provide a detailed performance analysis. DNDK-GCM is now a draft IETF standard, and is already deployed at cloud scale by companies including Meta.
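The derived-nonce, derived-key idea can be illustrated generically: expand a fresh per-message value under the root key into a (key, nonce) pair for the underlying AEAD, so the root key is never used directly for bulk encryption. The HMAC-based sketch below is NOT the DNDK-GCM derivation (which is AES-based, needing as little as three extra AES calls); it only shows the shape of the transform, with illustrative labels.

```python
import hmac
import hashlib

def dndk_derive(root_key: bytes, per_msg: bytes):
    """Generic derived-nonce, derived-key transform (illustrative):
    expand a per-message random value under the root key into a fresh
    32-byte key and 12-byte nonce for the underlying AEAD.
    The labels b"key"/b"nonce" are placeholders, not a real spec."""
    prf = lambda label: hmac.new(root_key, label + per_msg,
                                 hashlib.sha256).digest()
    derived_key = prf(b"key")[:32]
    nonce = prf(b"nonce")[:12]
    return derived_key, nonce
```

Because each message is encrypted under a freshly derived key, nonce-collision and data-volume limits apply per derived key rather than to the long-lived root key, which is what enables scaling a single root key to far more data.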
Last updated:  2025-09-16
A Symmetric Group-Based Public-Key Cryptosystem with Secret Partition-Dependent Decryption
Kaveh Dastouri
We present a purely theoretical public-key cryptosystem based on the symmetric group $\mathrm{Sym}_n$ and a one-way function derived from conjugacy class sizes. The secret key is a carefully chosen partition $\lambda \vdash n$, and the public key is $f(\lambda) = |C_\lambda| \cdot m_1(\lambda)$. Decryption inherently requires knowledge of $\lambda$ to compute $\phi(f(\lambda))$ or, equivalently, to factor $f(\lambda)$. The system combines combinatorial inversion hardness with integer factorization difficulty, ensuring that only someone who knows $\lambda$ can decrypt. Historical context, worked examples, and a theoretical security analysis are included.
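The public key described above is directly computable from the partition via the standard conjugacy-class-size formula $|C_\lambda| = n!/\prod_i i^{m_i}\, m_i!$, where $m_i$ is the number of parts of $\lambda$ equal to $i$. A short Python sketch:

```python
from math import factorial, prod
from collections import Counter

def conjugacy_class_size(partition):
    """Size of the conjugacy class C_lambda in Sym_n for a partition
    of n: |C_lambda| = n! / prod_i (i^{m_i} * m_i!)."""
    n = sum(partition)
    mult = Counter(partition)
    z = prod(i ** m * factorial(m) for i, m in mult.items())
    return factorial(n) // z

def public_key(partition):
    """f(lambda) = |C_lambda| * m_1(lambda), with m_1 the number of
    fixed points (parts equal to 1)."""
    return conjugacy_class_size(partition) * list(partition).count(1)

# Example: lambda = (2, 1, 1) |- 4, the transpositions in Sym_4:
# |C_lambda| = 4! / (2 * 2!) = 6 and m_1 = 2, so f(lambda) = 12.
assert conjugacy_class_size([2, 1, 1]) == 6
assert public_key([2, 1, 1]) == 12
```

As a sanity check, the class sizes over all partitions of $n$ sum to $n!$.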
Last updated:  2025-09-16
Proving the Security of PeerDAS without the AGM
Benedikt Wagner and Arantxa Zapico
Data availability sampling (DAS) enables clients to verify availability of data without downloading it entirely. This concept is crucial to Ethereum's roadmap. An instantiation of this concept, known as PeerDAS, relies at its core on a variant of KZG polynomial commitments and is set to be integrated into Ethereum. To assess the security of PeerDAS, Wagner and Zapico (ePrint 2024) provided a formal analysis, proving its security as a cryptographic primitive. However, their proof relies on the algebraic group model, an idealized framework known to be uninstantiable (Zhandry, CRYPTO 2022). In this work, we establish the security of PeerDAS in the standard model under falsifiable assumptions. Specifically, we eliminate reliance on the algebraic group model and instead base our proof on the ARSDH assumption (Lipmaa et al., EUROCRYPT 2024), thus strengthening the theoretical foundations of PeerDAS and enhancing confidence in its security.
Last updated:  2025-09-16
Fast Zero-Knowledge Argument System with Short Polynomial Using Direct Computation
Frank Y.C. Lu
We introduce a new transparent zero-knowledge argument system that significantly reduces the size of the polynomial commitment scheme (PCS) that needs to be evaluated relative to the circuit size. In our protocol, committed input parameters are transformed into a format that the verifier can process directly, so the output of the circuit polynomial at some evaluation point can be computed directly by the verifier, instead of asking the prover to run the expensive PCS protocol on circuit polynomial(s) as big as the circuit size. In the default setting, the prover's cost in group exponentiation operations is only the square root of the degree ($\sqrt{p_d}$) of the polynomial the circuit generates. This direct-computation approach brings many additional benefits. For example, it allows inputs and outputs of a circuit to be in the same commitment format. This opens up new use cases, such as an openly accessible database whose data is perfectly hidden by commitments while still allowing anyone to validate updates to the data in zero knowledge. Our benchmark results show that our approach can significantly improve both proving and verification speed over the state of the art, by one order of magnitude, for circuits of any type and depth, while keeping the communication cost competitive (code is included in the attachment). Our approach also allows the protocol to be made memory-efficient while remaining non-interactive. The theoretical memory cost of our protocol is $O(b + s)$, where $s = b = \sqrt{p_d}$ in the default setting.
Last updated:  2025-09-16
pod: An Optimal-Latency, Censorship-Free, and Accountable Generalized Consensus Layer
Orestis Alpos, Bernardo David, Jakov Mitrovski, Odysseas Sofikitis, and Dionysis Zindros
This work addresses the inherent issues of high latency in blockchains and low scalability in traditional consensus protocols. We present pod, a novel notion of consensus whose first priority is to achieve the physically-optimal latency of $2\delta$, or one round-trip, i.e., requiring only one network trip (duration $\delta$) for writing a transaction and one for reading it. To accomplish this, we first eliminate inter-replica communication. Instead, clients send transactions directly to all replicas, which independently process transactions and append them to local logs. Replicas assign a timestamp and a sequence number to each transaction in their logs, allowing clients to extract valuable metadata about the transactions and the system state. Later on, clients retrieve these logs and extract transactions (and associated metadata) from them. Necessarily, this construction achieves weaker properties than a total-order broadcast protocol, due to existing lower bounds. Our work models the primitive of pod and defines its security properties. We then show pod-core, a protocol that satisfies properties such as transaction confirmation within $2\delta$, censorship resistance against Byzantine replicas, and accountability for safety violations. We show that single-shot auctions can be realized using the pod notion and observe that it is also sufficient for other popular applications.
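The write/read flow described above can be sketched as follows. This is a toy model with hypothetical names, not the paper's pod-core protocol: clients talk to every replica directly, and each replica independently timestamps and sequences what it receives, with no inter-replica communication.

```python
import time

class Replica:
    """Toy replica: appends each transaction to a local log with metadata."""
    def __init__(self, rid):
        self.rid = rid
        self.log = []                      # list of (seq, timestamp, tx)

    def write(self, tx):
        # The replica assigns a sequence number and a timestamp on its own.
        self.log.append((len(self.log), time.time(), tx))

    def read(self):
        return list(self.log)

class Client:
    """Toy client: one trip to write to all replicas, one trip to read."""
    def __init__(self, replicas):
        self.replicas = replicas

    def write(self, tx):
        for r in self.replicas:            # write trip: contact every replica
            r.write(tx)

    def read(self, tx):
        # Read trip: collect the per-replica metadata recorded for tx.
        return [e for r in self.replicas for e in r.read() if e[2] == tx]

replicas = [Replica(i) for i in range(4)]
alice = Client(replicas)
alice.write("pay bob 5")
entries = alice.read("pay bob 5")          # one entry per replica
```

A real deployment would authenticate replica responses and derive confirmation from the collected timestamps; this sketch only shows the communication pattern.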
Last updated:  2025-09-16
Starfighters—On the General Applicability of X-Wing
Deirdre Connolly, Kathrin Hövelmanns, Andreas Hülsing, Stavros Kousidis, and Matthias Meijers
In this work, we present a comprehensive analysis of QSF, the KEM combiner used by X-Wing (Communications in Cryptology 1(1), 2024). While the X-Wing paper focuses on the application of QSF to ML-KEM-768 and X25519, we discuss the combiner’s applicability to other post-quantum KEMs and ECDH instantiations. Particularly, we establish the compatibility of QSF with KEMs based on variants of the Fujisaki-Okamoto transform by proving ciphertext second-preimage resistance (C2PRI) for these variants. Building on these results, we show that QSF is compatible with, to the best of our knowledge, all post-quantum KEMs currently standardized or considered for standardization—including ML-KEM, (e)FrodoKEM, HQC, Classic McEliece, and various NTRU variants. Notably, this means these schemes can be used with QSF to construct PQ/T hybrid KEMs. In addition, we introduce QSI, a variant of QSF that combines two KEMs by hashing their shared keys, yielding a KEM that is IND-CCA-secure as long as one constituent KEM is IND-CCA-secure and the other is C2PRI-secure. We establish the same compatibility results for QSI as for QSF. Finally, we analyze both QSF and QSI regarding (their preservation of) the recently introduced family of binding properties for KEMs.
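The "combine two KEMs by hashing their shared keys" idea can be sketched with placeholder KEMs. Everything here is a hypothetical toy: the stand-in KEM functions are not real schemes, and the exact inputs, labels, and ordering of the QSI/QSF combiners differ from this sketch.

```python
import hashlib, os

def toy_kem_keygen():
    # Placeholder KEM; a real instantiation would be e.g. ML-KEM or X25519.
    sk = os.urandom(32)
    pk = hashlib.sha256(sk).digest()
    return pk, sk

def toy_kem_encaps(pk):
    r = os.urandom(32)
    ct = r                                   # stand-in ciphertext
    k = hashlib.sha256(pk + r).digest()      # stand-in shared key
    return ct, k

def toy_kem_decaps(sk, ct):
    pk = hashlib.sha256(sk).digest()
    return hashlib.sha256(pk + ct).digest()

def combine(k1, k2, ct1, ct2):
    # Hash both shared keys (and ciphertexts) into a single key, in the
    # spirit of hash-based KEM combiners; not the exact QSI definition.
    h = hashlib.sha3_256()
    for part in (k1, k2, ct1, ct2):
        h.update(part)
    return h.digest()

pk1, sk1 = toy_kem_keygen(); pk2, sk2 = toy_kem_keygen()
ct1, k1 = toy_kem_encaps(pk1); ct2, k2 = toy_kem_encaps(pk2)
key_sender = combine(k1, k2, ct1, ct2)
key_receiver = combine(toy_kem_decaps(sk1, ct1),
                       toy_kem_decaps(sk2, ct2), ct1, ct2)
```

The point of the hybrid construction is that the combined key stays secure if either constituent KEM holds up, which is why the security requirements on the two components (IND-CCA vs. C2PRI) can be asymmetric.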
Last updated:  2025-09-16
Modular Forms and Hecke Operators for Post-Quantum Cryptography
Trey Li
We introduce modular forms and Hecke operators to cryptography and propose the Hecke problem as a new foundation for post-quantum cryptography. Given two modular forms, the Hecke problem asks to recover the Hecke operator that maps one to the other. While there is a deep relation to isogeny problems through the modularity theorem, this problem is rooted in arithmetic geometry and differs fundamentally in structure and mechanism. We prove NP-hardness of this problem and use it to construct a non-interactive key exchange scheme that achieves higher efficiency than isogeny-based schemes and smaller key sizes than lattice-based and code-based schemes.
Last updated:  2025-09-16
SoK: Connecting the Dots in Privacy-Preserving ML - Systematization of MPC Protocols and Conversions Between Secret Sharing Schemes
Martin Zbudila, Ajith Suresh, Hossein Yalame, Omid Mirzamohammadi, Aysajan Abidin, and Bart Preneel
Privacy-preserving machine learning (PPML) has become increasingly important due to the need to protect sensitive data during training and inference. Secure multiparty computation (MPC) and homomorphic encryption (HE) have emerged as foundational technologies, enabling secure computation over private data. In this work, we provide a systematic comparative overview of MPC frameworks for PPML, focusing on protocols that introduce novel approaches rather than incremental improvements. Frameworks are analyzed based on computational and communication complexity, throughput, security guarantees, and applicability in small-party settings. Each underlying primitive in PPML is examined from an MPC perspective, highlighting its role and trade-offs. We also emphasize the diversity of secret-sharing schemes and associated interoperability challenges, proposing scheme conversions to facilitate efficient hybrid solutions. This Systematization of Knowledge guides researchers in identifying open problems and practitioners in selecting effective MPC-based frameworks for real-world PPML deployment.
Last updated:  2025-09-16
Superglue: Fast formulae for (2,2)-gluing isogenies
Max Duparc
Following Mumford's theory, theta structures on products of elliptic curves are induced by symmetries whose eigenvectors correspond to 4-torsion points on the Kummer line. These symmetries introduce a rich pattern of self-similarities within the theta structure that we exploit to enhance the computation of gluing isogenies. Focusing on the dimension-2 case, we show how theta structures can be computed projectively, thereby avoiding costly modular inversions. Moreover, by leveraging the sparsity of certain specific 4-torsion points and the action of the canonical 2-torsion points in the Kummer line, we derive new formulae for the evaluation of (2,2)-gluing isogenies. These formulae require significantly fewer precomputations and arithmetic operations than previous methods. Additionally, our formulae also support the evaluation of points on the quadratic twist at negligible additional cost, without requiring operations in an extended field.
Last updated:  2025-09-16
Password-Hardened Encryption Revisited
Ruben Baecker, Paul Gerhart, and Dominique Schröder
Passwords remain the dominant form of authentication on the Internet. The rise of single sign-on (SSO) services has centralized password storage, increasing the devastating impact of potential attacks and underscoring the need for secure storage mechanisms. A decade ago, Facebook introduced a novel approach to password security, later formalized in Pythia by Everspaugh et al. (USENIX'15), which proposed the concept of password hardening. The primary motivation behind these advances is to achieve provable security against offline brute-force attacks. This work initiated significant follow-on research (CCS'16, USENIX'17), including Password-Hardened Encryption (PHE) (USENIX'18, CCS'20), which was introduced shortly thereafter. Virgil Security commercializes PHE as a software-as-a-service solution and integrates it into its messenger platform to enhance security. In this paper, we revisit PHE and provide both negative and positive contributions. First, we identify a critical weakness in the original design and present a practical cryptographic attack that enables offline brute-force attacks -- the very threat PHE was designed to mitigate. This weakness stems from a flawed security model that fails to account for real-world attack scenarios and the interaction of security properties with key rotation, a mechanism designed to enhance security by periodically updating keys. Our analysis shows how the independent treatment of security properties in the original model leaves PHE vulnerable. We demonstrate the feasibility of the attack by extracting passwords in seconds that were secured by the commercialized but open-source PHE provided by Virgil Security. On the positive side, we propose a novel, highly efficient construction that addresses these shortcomings, resulting in the first practical PHE scheme that achieves security in a realistic setting. 
We introduce a refined security model that accurately captures the challenges of practical deployments, and prove that our construction meets these requirements. Finally, we provide a comprehensive evaluation of the proposed scheme, demonstrating its robustness and performance.
Last updated:  2025-09-16
Two-Key Variant of the Four-Round Cascading LRW1
Shreya Dey, Avijit Dutta, and Kazuhiko Minematsu
In EUROCRYPT'20, Bao et al. proved that three rounds of the cascaded LRW1 construction provide security up to $2^{2n/3}$ queries. However, in a recent work, Khairallah et al. have shown that the construction provides only birthday-bound security by exhibiting a distinguishing attack on the construction, thereby invalidating the claim of Bao et al. In an independent and contemporaneous work, Datta et al. have shown that four rounds of cascading of the $\textsf{LRW1}$ construction, dubbed $\textsf{CLRW1}^4$—based on four independent keyed block ciphers—achieves $3n/4$-bit CCA security. In this paper, we show that a key-reduced variant of the $\textsf{CLRW1}^4$ construction, dubbed $\textsf{R}\mbox{-}\textsf{CLRW1}^4$ and based on two independent keyed block ciphers, achieves $2n/3$-bit CCA security. The security proof of our construction relies on heavy use of the H-Coefficient technique and non-trivial analysis in lower-bounding the real interpolation probability for good transcripts.
Last updated:  2025-09-16
Diffie–Hellman Key Exchange from Commutativity to Group Laws
Dung Hoang Duong, Youming Qiao, and Chuanqi Zhang
In Diffie–Hellman key exchange, the commutativity of power operations is instrumental in the agreement of keys. Viewing commutativity as a law in abelian groups, we propose Diffie–Hellman key exchange in the group action framework (Brassard–Yung, Crypto'90; Ji–Qiao–Song–Yun, TCC'19), for actions of non-abelian groups with laws. The security of this protocol is shown, following Fischlin, Günther, Schmidt, and Warinschi (IEEE S&P'16), based on a pseudorandom group action assumption. A concrete instantiation is proposed based on the monomial code equivalence problem.
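The commutativity the abstract starts from can be made concrete in the classical group setting: the maps $x \mapsto x^a$ and $x \mapsto x^b$ commute, so both parties arrive at $g^{ab}$. The sketch below uses plain modular exponentiation with toy, insecure parameters; the paper's framework replaces this with actions of non-abelian groups with laws.

```python
# Classic Diffie-Hellman as an instance of commuting actions:
# applying "raise to the a-th power" and "raise to the b-th power"
# in either order yields the same key g^(ab).
p = 0xFFFFFFFFFFFFFFC5          # toy prime modulus (not a secure size)
g = 5

a, b = 0x1234567, 0x89ABCDE     # Alice's and Bob's secret exponents
A = pow(g, a, p)                # Alice publishes g^a
B = pow(g, b, p)                # Bob publishes g^b

k_alice = pow(B, a, p)          # (g^b)^a
k_bob = pow(A, b, p)            # (g^a)^b
```

In the group-action view, `pow(·, a, p)` is the action of the group element $a$; the protocol generalizes whenever two such actions satisfy the required law.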
Last updated:  2025-09-16
Honest Majority Constant-Round MPC with Linear Communication from One-Way Functions
Junru Li and Yifan Song
Secure multiparty computation (MPC) faces a fundamental efficiency trade-off between round complexity and communication complexity: without fully homomorphic encryption, protocols with constant round complexity (e.g., protocols based on garbled circuits) incur high communication cost, while communication-efficient approaches (e.g., protocols based on secret sharing) have round complexity linear in the depth of the circuit. In this work, we focus on reducing the communication complexity of constant-round MPC protocols in the honest majority setting. Existing results either rely on strong assumptions (e.g., random oracles, DDH, LPN) or incur high communication of $\Omega(|C|n^2\kappa)$ bits under one-way functions (OWFs). However, non-constant-round MPC protocols can achieve linear communication in the number of parties even with information-theoretic security. We resolve this gap by presenting the first constant-round honest majority MPC protocol with linear communication complexity of $O(|C|n\kappa + n^2\kappa^2+n^4\kappa)$ only from OWFs. We introduce novel techniques for computing garbled circuits via party virtualization and efficient local computation of virtual parties, which optimize the existing protocols on multiparty garbling. These allow us to overcome the $O(n^2\kappa)$-bits-per-gate communication bottleneck of prior protocols, matching the scalability of the best non-constant-round protocols in the same setting.
Last updated:  2025-09-16
New Limits for Homomorphic Encryption
Sven Schäge and Marc Vorstermans
We make progress on the foundational problem of determining the strongest security notion achievable by homomorphic encryption. Our results are negative. We prove that a wide class of semi-homomorphic public key encryption schemes (SHPKE) cannot be proven IND-PCA secure (indistinguishability against plaintext checkability attacks), a relaxation of IND-CCA security. This class includes widely used and versatile schemes like ElGamal PKE, Paillier PKE, and the linear encryption system by Boneh, Boyen, and Shacham (Crypto'04). Besides their homomorphic properties, these schemes have in common that they all provide efficient algorithms for recognizing valid ciphertexts (and public keys) from public values. In contrast to CCA security, where the adversary is given access to a decryption oracle, in a PCA attack, the adversary solely has access to an oracle that decides for a given ciphertext/plaintext pair $(c,m)$ if decrypting $c$ indeed gives $m$. Since the notion of IND-PCA security is weaker than IND-CCA security, it is not only easier to achieve, leading to potentially more efficient schemes in practice, but it also side-steps existing impossibility results that rule out IND-CCA security. To rule out IND-PCA security, we thus have to rely on novel techniques. We provide two results, depending on whether the attacker is allowed to query the PCA oracle after it has received the challenge (IND-PCA2 security) or not (IND-PCA1 security -- the more challenging scenario). First, we show that IND-PCA2 security can be ruled out unconditionally if the number of challenges is smaller than the number of queries made after the challenge. Next, we prove that no Turing reduction can reduce the IND-PCA1 security of SHPKE schemes with $O(\kappa^3)$ PCA queries overall to interactive complexity assumptions that support $t$-time access to their challenger with $t=O(\kappa)$.
To obtain our second impossibility result, we develop a new meta-reduction-based methodology that can be used to tackle security notions where the attacker is granted access to a \emph{decision} oracle. This makes it challenging to utilize the techniques of existing meta-reduction-based impossibility results that focus on definitions where the attacker is allowed to access an inversion oracle (e.g. long-term key corruptions or a signature oracle). To obtain our result, we have to overcome several technical challenges that are entirely novel to the setting of public key encryption.
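The difference between the two oracle types discussed above can be made concrete with textbook ElGamal, one of the schemes in the affected class. This is an insecure toy instantiation for illustration only: the decryption oracle of a CCA attack returns a full plaintext, while the plaintext-checking oracle of a PCA attack returns only a single bit.

```python
import random

# Toy multiplicative ElGamal (tiny, insecure parameters; illustration only).
p = 467                              # small prime modulus
g = 2
x = random.randrange(1, p - 1)       # secret key
h = pow(g, x, p)                     # public key h = g^x

def enc(m):
    r = random.randrange(1, p - 1)
    return (pow(g, r, p), (m * pow(h, r, p)) % p)

def dec(c):
    c1, c2 = c
    # c2 * c1^(-x) = m * h^r * g^(-rx) = m  (inverse via Fermat's little theorem)
    return (c2 * pow(c1, p - 1 - x, p)) % p

def dec_oracle(c):
    # CCA-style oracle: hands the adversary the full decryption.
    return dec(c)

def pca_oracle(c, m):
    # PCA-style oracle: only answers "does c decrypt to m?".
    return dec(c) == m

c = enc(42)
```

Note that ElGamal's multiplicative homomorphism lets an adversary maul `c` into a related ciphertext and feed guesses to the checking oracle, which is the kind of interaction the impossibility results have to reason about.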
Last updated:  2025-09-16
Surtr: Transparent Verification with Simple yet Strong Coercion Mitigation
Rosario Giustolisi, Maryam Sheikhi Garjan, and Peter Browne Rønne
Transparent verification allows voters to directly identify their vote in cleartext in the final tally result. Both Selene and Hyperion offer this simple and intuitive verification method, and at the same time allow for coercion to be mitigated under the assumption that tally servers can privately notify voters of the keying material needed for verification. Subsequently, a voter can generate fake keying material to deceive a coercer. In this paper, we propose Surtr, a new scheme that enables transparent verification without requiring a private notification channel. This approach strengthens coercion mitigation, since a coercer can monitor the notification channel, and simplifies the process by eliminating the need for voters to generate fake keying material for the coercer.
Last updated:  2025-09-16
Fast Homomorphic Evaluation of LWR-based PRFs
Amit Deo, Marc Joye, Benoit Libert, Benjamin R. Curtis, and Mayeul de Bellabre
Certain applications of fully homomorphic encryption (such as transciphering, universal thresholdizers, PIR, and ORAM) require randomness while operating over encrypted data. This randomness has to be obliviously generated in the encrypted domain and remain encrypted throughout the computation. Moreover, it should be guaranteed that independent-looking random coins can be obliviously generated for different computations. In this work, we consider the homomorphic evaluation of pseudorandom functions (PRFs) with a focus on practical lattice-based candidates. In the homomorphic PRF evaluation setting, given a fully homomorphic encryption of the PRF secret key $\vec{s}$, it should be possible to homomorphically compute encryptions of PRF evaluations $\{ \text{PRF}_{\vec{s}}(x_i) \}_{i=1}^M$ for public inputs $\{ x_i\}_{i=1}^M$. We consider this problem for PRF families based on the hardness of the Learning-With-Rounding (LWR) problem introduced by Banerjee, Peikert and Rosen (Eurocrypt 2012). We build on the random-oracle variant of a PRF construction suggested by Banerjee et al. and demonstrate that it can be evaluated using only two sequential programmable bootstraps in the TFHE homomorphic encryption scheme. We also describe several modifications of this PRF---which we prove as secure as the original function---that support homomorphic evaluations using only one programmable bootstrap per slot. Numerical experiments were conducted using practically relevant FHE parameter sets from the TFHE-rs library. Our benchmarks show that a throughput of about 1000 encrypted pseudorandom bits per second (resp. 900 encrypted pseudorandom bits per second) can be achieved on an AWS hpc7a.96xlarge machine (resp. on a standard laptop with an Apple M2 chip), on a single thread. The PRF evaluation keys in our experiments have sizes roughly $40\%$ and $60\%$ of the size of a bootstrapping key.
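The rounding at the core of LWR can be sketched in the clear as follows. This is a toy rendition of a random-oracle-style LWR PRF of the shape $F_{\vec{s}}(x) = \lfloor \langle \vec{a}_x, \vec{s} \rangle \rceil_p$: the parameters, the hash-based derivation of $\vec{a}_x$, and the key are all illustrative and are not the paper's construction, which evaluates such a function homomorphically.

```python
import hashlib

q, p, n = 2**16, 2**8, 64     # toy LWR parameters (illustration only)

def round_qp(v):
    # Deterministic rounding from Z_q down to Z_p: the "R" in LWR.
    return (v * p + q // 2) // q % p

def a_of_x(x):
    # Random-oracle-style derivation of the public vector a_x from input x.
    out, ctr = [], 0
    while len(out) < n:
        d = hashlib.sha256(x + ctr.to_bytes(4, "big")).digest()
        out += [int.from_bytes(d[i:i+2], "big") % q for i in range(0, 32, 2)]
        ctr += 1
    return out[:n]

def prf(s, x):
    ax = a_of_x(x)
    return round_qp(sum(ai * si for ai, si in zip(ax, s)) % q)

s = [7] * n                   # toy secret key
y = prf(s, b"input")
```

Because the rounding is deterministic, the function needs no per-evaluation randomness, which is what makes it a natural target for evaluation under FHE via programmable bootstrapping.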
Last updated:  2025-09-16
Scalable zkSNARKs for Matrix Computations: A Generic Framework for Verifiable Deep Learning
Mingshu Cong, Sherman S. M. Chow, Siu Ming Yiu, and Tsz Hon Yuen
Sublinear proof sizes have recently become feasible in verifiable machine learning (VML), yet no approach achieves the trio of strictly linear prover time, logarithmic proof size and verification time, and architecture privacy. Hurdles persist because we lack a succinct commitment to the full neural network and a framework for heterogeneous models, leaving verification dependent on architecture knowledge. Existing limits motivate our new approach: a unified proof-composition framework that casts VML as the design of zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) for matrix computations. Representing neural networks with linear and non-linear layers as a directed acyclic graph of atomic matrix operations enables topology-aware composition without revealing the graph. Modeled this way, we split proving into a reduction layer and a compression layer that attests to the reduction with a proof of proof. At the reduction layer, inspired by reduction of knowledge (Crypto '23), root-node proofs are reduced to leaf-node proofs under an interface standardized for heterogeneous linear and non-linear operations. Next, a recursive zkSNARK compresses the transcript into a single proof while preserving architecture privacy. Complexity-wise, for a matrix expression with $M$ atomic operations on $n \times n$ matrices, the prover runs in $O(M n^2)$ time while proof size and verification time are $O(\log(M n))$, outperforming known VML systems. Honed for this framework, we formalize relations directly in matrices or vectors---a more intuitive form for VML than traditional polynomials. Our LiteBullet proof, an inner-product proof built on folding and its connection to sumcheck (Crypto '21), yields a polynomial-free alternative. With these ingredients, we reconcile heterogeneity, zero knowledge, succinctness, and architecture privacy in a single VML system.
Last updated:  2025-09-16
Card-Based Protocol Counting Connected Components of Graphs
Koji Nuida
Card-based cryptography is a research area for realizing cryptographic functionality, such as secure multiparty computation and zero-knowledge proofs, by using a deck of physical cards and/or other non-electrical tools. Motivated by zero-knowledge proofs for solutions in pencil puzzles, there is a direction of recent studies on card-based protocols to verify connectivity of a set of cells or edges on lattice-shaped boards. In this paper, we generalize the problem to counting connected components of a subset of the vertex set on any graph, and propose a card-based protocol for the problem.
Last updated:  2025-09-16
Secure Rate-Distortion-Perception Trade-Off with Side Information
Gustaf Åhlgren and Onur Günlü
Secure rate-distortion-perception (RDP) trade-offs are relevant for applications such as semantic compression, where the perceptual quality needs to be maximized. We study a framework for secure RDP over an ideal public communication channel in the presence of an eavesdropper, where the legitimate parties also have access to side information correlated with the source. The exact rate region for the secure RDP trade-off is established when both the encoder and the decoder have access to the side information. We then characterize an inner bound when only the decoder has access to the side information and establish the exact region for a special case. Moreover, we provide an RDP example to illustrate remarkable gains in communication rate due to common randomness, which is not possible to obtain for rate-distortion trade-offs. Our results show that binning-based schemes can achieve high perceptual quality, low distortion, and strong secrecy simultaneously, establishing the information-theoretic limits for next-generation trustworthy semantic compression systems.
Last updated:  2025-09-16
Post-Quantum Blockchain: Transition Landscape Amidst Evolving Complexity
Kigen Fukuda and Shin’ichiro Matsuo
The emergence of Cryptographically Relevant Quantum Computers (CRQCs) poses an existential threat to the security of contemporary blockchain networks, which rely on public-key cryptography vulnerable to Shor’s algorithm. While the need for a transition to Post-Quantum Cryptography (PQC) is widely acknowledged, the evolution of blockchains from simple transactional ledgers to complex, multi-layered financial ecosystems has rendered early, simplistic migration plans obsolete. This paper provides a comprehensive analysis of the blockchain PQC migration landscape as it stands in 2025. We dissect the core technical challenges, including the performance overhead of PQC algorithms, the loss of signature aggregation efficiency vital for consensus and scalability, and the cascading complexities within Layer 2 protocols and smart contracts. Furthermore, the analysis extends to critical operational and socio-economic hurdles, such as the ethical dilemma of dormant assets and the conflicting incentives among diverse stakeholders including users, developers, and regulators. By synthesizing ongoing community discussions and roadmaps for Bitcoin, Ethereum, and others, this work establishes a coherent framework to evaluate migration requirements, aiming to provide clarity for stakeholders navigating the path toward a quantum-secure future.
Last updated:  2025-09-16
All Paths Lead to the Root
Théophile Brézot and Chloé Hébant
In an attempt to fix the defects of the definition of forward security for Symmetric Searchable Encryption (SSE) schemes, Amjad et al. [2] proposed injection security. This new security property is strictly stronger than most security properties known to date, which makes it particularly challenging to design schemes meeting its requirements. In this work, we show how it is possible to use trees to decorrelate the modification of an index from its effects, hence achieving injection security. In addition to being conceptually simple, our scheme features non-interactive, stateless and mutation-free search operations that allow supporting concurrent readers easily. Finally, the proposed reference implementation is efficient: both Insert and Search operations execute in milliseconds even when operating on an index with up to a million entries and volumes up to a thousand.
Last updated:  2025-09-15
QKD Oracles for Authenticated Key Exchange
Kathrin Hövelmanns, Daan Planken, Christian Schaffner, and Sebastian Verschoor
Authenticated Key Exchange (AKE) establishes shared ('symmetric') cryptographic keys which are essential for secure online communication. AKE protocols can be constructed from public-key cryptography like Key Encapsulation Mechanisms (KEMs). Another approach is to use Quantum Key Distribution (QKD) to establish a symmetric key, which uses quantum communication. Combining post-quantum AKE and QKD appropriately may provide security against quantum attacks even if only one of the two approaches turns out to be secure. We provide an extensive review of existing security analyses for combined AKE and their formal security models, and identify some gaps in their treatment of QKD key IDs. In particular, improper handling of QKD key IDs leads to Dependent-Key attacks on AKE. As our main conceptual contribution, we model QKD as an oracle that closely resembles the standard ETSI 014 QKD interface. We demonstrate the usability of our QKD oracle for cryptographic security analyses by integrating it into a prominent security model for AKE, called CK+ model, thereby obtaining a security model for combined AKE that catches Dependent-Key attacks. In this model, we formally prove security of a new protocol that combines QKD with a triple-KEM handshake. This is the first provably secure hybrid protocol that maintains information-theoretic security of QKD.
Last updated:  2025-09-15
A Knot-based Key Exchange protocol
Silvia Sconza and Arno Wildi
We propose a new key exchange protocol based on the Generalised Diffie-Hellman Key Exchange. In the latter, instead of using a group action, we consider a semigroup action. In our proposal, the semigroup is the set of oriented knots in $\mathbb{S}^3$ with the operation of connected sum. As a semigroup action, we choose the action of the semigroup on itself through the connected sum. For the protocol to work, we need to use knot invariants, which allow us to create the shared secret key starting from the same knot represented in two different ways. In particular, we use finite type invariants. The security of the protocol is guaranteed by the hardness of decomposing knots in the semigroup.
Last updated:  2025-09-15
Revisiting the Robustness of {(R/M)LWR} under Polynomial Moduli and its Applications
Haoxiang Jin, Feng-Hao Liu, Zhedong Wang, and Yang Yu
This work conducts a comprehensive investigation on determining the entropic hardness of (Ring/Module) Learning with Rounding (LWR) under polynomial modulus. Particularly, we establish the hardness of (M)LWR for general entropic secret distributions from (Module) LWE assumptions based on a new conceptually simple framework called rounding lossiness. By combining this hardness result and a trapdoor inversion algorithm with asymptotically the most compact parameters, we obtain a compact lossy trapdoor function (LTF) with improved efficiency. Extending our LTF with other techniques, we can derive a compact all-but-many LTF and PKE scheme against selective opening and chosen ciphertext attacks, solely based on (Module) LWE assumptions within a polynomial modulus. Additionally, we show a search-to-decision reduction for RLWR with Gaussian secrets from a new Rényi Divergence-based analysis.
Last updated:  2025-09-15
Uncompressing Dilithium's public key
Paco Azevedo Oliveira, Andersson Calle Viera, Benoît Cogliati, and Louis Goubin
The Dilithium signature scheme – recently standardized by NIST under the name ML-DSA – owes part of its success to a specific mechanism that allows an optimization of its public-key size. Namely, among the data of the MLWE instance $\bf (A,\bf{t})$, which is at the heart of the construction of Dilithium, the least significant part of $\bf{t}$ -- denoted by $\bf{t}_0$ -- is not included in the public key. The verification algorithm has been adapted accordingly, so that it does not require knowledge of $\bf{t}_0$. However, since it is still required to compute valid signatures, it has been made part of the secret key. The knowledge of $\bf{t}_0$ has no impact on the black-box cryptographic security of Dilithium, as can be seen in the security proof. Nevertheless, it does allow the construction of much more efficient side-channel attacks. Whether it is possible to recover $\bf{t}_0$ thus appears to be a sensitive question. In this work, we show that each Dilithium signature leaks information on $\bf{t}_0$, then we construct an attack that retrieves it from Dilithium signatures. Experimentally, depending on the Dilithium security level, between $200\,000$ and $500\,000$ signatures are sufficient to recover $\bf{t}_0$ on a desktop computer.
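The split of $\bf{t}$ into a public high part and a secret low part is the standard Power2Round routine: each coefficient $r$ is written as $r = r_1 \cdot 2^d + r_0$ with $r_0$ in the centered range $(-2^{d-1}, 2^{d-1}]$, and only the $r_1$ parts enter the public key. The sketch below follows the FIPS 204 definition with ML-DSA's parameters $d = 13$, $q = 8380417$; it illustrates the decomposition only, not the paper's attack.

```python
def power2round(r, d=13, q=8380417):
    # Split r into r1 * 2^d + r0 with r0 in (-2^(d-1), 2^(d-1)],
    # as in the Power2Round routine of ML-DSA (FIPS 204).
    r = r % q
    r0 = r % (1 << d)
    if r0 > (1 << (d - 1)):
        r0 -= 1 << d               # centered ("mod±") representative
    return (r - r0) >> d, r0       # (high part t1, low part t0)

r = 1234567
r1, r0 = power2round(r)
```

Since verification only checks against $\bf{t}_1 \cdot 2^d$, the discarded $\bf{t}_0$ contributes a small error term that each signature constrains, which is the leakage the attack in the paper exploits.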
Last updated:  2025-09-15
Mixderive: A New Framework of Deriving Linear Approximations and Improved Differential-Linear Distinguishers for ChaCha
Zhengting Li, Lin Ding, Xinhai Wang, and Jiang Wan
ChaCha is a well-known ARX-based cipher and has become one of the most widely used ciphers in the real world. In this paper, a systematic three-case framework called \emph{Mixderive} to find linear approximations for ChaCha is proposed. By this new framework, new linear approximations for 3.5- and 4-round ChaCha are found, which are significantly better than the existing linear approximations proposed at EUROCRYPT 2021 and ASIACRYPT 2022. These improvements confirm the effectiveness of \emph{Mixderive}. In addition, new 2- and 2.5-round linear approximations for ChaCha are found by \emph{Mixderive}. Based on these new findings, new differential-linear distinguishers for 7- and 7.5-round ChaCha256 with complexities ${2^{162.28}}$ and ${2^{247.08}}$ are proposed, which improve the best known distinguishers by factors of ${2^{4.61}}$ and ${2^{4.46}}$, respectively. To the best of our knowledge, both cryptanalytic results are the best.
Last updated:  2025-09-15
Secure and efficient transciphering for FHE-based MPC
Diego F. Aranha, Antonio Guimarães, Clément Hoffmann, and Pierrick Méaux
Transciphering (or Hybrid-Homomorphic Encryption, HHE) is an established technique for avoiding ciphertext expansion in HE applications, saving communication and storage resources. Recently, it has also been shown to be a fundamental component in the practical construction of HE-based multi-party computation (MPC) protocols, being used both for input data and intermediary results (Smart, IMACC 2023). In these protocols, however, ciphers are used with keys that are jointly generated by multiple (possibly malicious) parties, which may require additional security assumptions that have been so far overlooked in the HHE literature. In this paper, we formalize this issue as a security against related-key attacks (RKA) problem and provide efficient solutions for it. We start by presenting an efficient method for homomorphically evaluating Mixed-Filter-Permutator (MFP) ciphers in leveled mode, enabling speedups of up to thousands of times compared to previous literature. For the multi-party scenario, we focus specifically on the Margrethe cipher (Hoffmann et al., INDOCRYPT 2023). We show that, contrary to other commonly used HHE ciphers (e.g. FLIP), Margrethe is out-of-the-box secure for any protocols that allow malicious parties to learn up to two related key streams, enabling security for the vast majority of static MPC protocols. For other cases, we quantify the loss of security based on the number of related key streams (which often depends on the number of malicious parties and the specific protocol). Performance-wise, our implementation of Margrethe takes just 3.9 ms to transcipher 4-bit messages, being significantly faster than the state of the art in terms of latency.
Last updated:  2025-09-15
MPC for $Q_2$ Access Structures over Rings and Fields
Robin Jadoul, Nigel P. Smart, and Barry Van Leeuwen
We examine Multi-Party Computation protocols in the active-security-with-abort setting for $Q_2$ access structures over small and large finite fields $F_p$ and over rings $Z_{p^k}$. We give general protocols which work for any $Q_2$ access structure which is realised by a multiplicative Extended Span Program. We generalize a number of techniques and protocols from various papers and compare the different methodologies. In particular we examine the expected communication cost per multiplication gate when the protocols are instantiated with different access structures.
Last updated:  2025-09-15
Solving Concealed ILWE and its Application for Breaking Masked Dilithium
Simon Damm, Asja Fischer, Alexander May, Soundes Marzougui, Leander Schwarz, Henning Seidler, Jean-Pierre Seifert, Jonas Thietke, and Vincent Quentin Ulitzsch
Lattice-based signatures like Dilithium (ML-DSA) prove knowledge of a secret key $\vec s \in \mathbb{Z}^n$ by using Integer LWE (ILWE) samples $z = \langle \vec c, \vec s \rangle + y$, for some known hash value $\vec c \in \mathbb{Z}^n$ of the message and unknown error $y$. Rejection sampling guarantees zero-knowledge, which makes the ILWE problem, which asks to recover $\vec s$ from many $z$'s, unsolvable. Side-channel attacks partially recover $y$, thereby obtaining more informative samples and resulting in a potentially tractable ILWE problem. The standard method to solve the resulting problem is Ordinary Least Squares (OLS), which requires independence of $y$ from $\langle \vec c, \vec s \rangle$, an assumption that is violated by zero-knowledge samples. We present efficient algorithms for a variant of the ILWE problem that was not addressed in prior work, which we coin Concealed ILWE (CILWE). In this variant, only a fraction of the ILWE samples are zero-knowledge. We call this fraction the concealment rate. This ILWE variant naturally occurs in side-channel attacks on lattice-based signatures. A case in point are profiling side-channel attacks on Dilithium implementations that classify whether $y = 0$. This gives rise either to zero-error ILWE samples $z = \langle \vec c, \vec s \rangle$ with $y = 0$ (in case of correct classification) or to ordinary zero-knowledge ILWE samples (in case of misclassification). As we show, OLS is not practical for CILWE instances, as it requires a prohibitively large number of samples even for small (under 10%) concealment rates. A known integer linear programming (ILP) based approach can solve some CILWE instances, but suffers from two shortcomings. First, it lacks provable efficiency guarantees, as ILP is NP-hard in the worst case. Second, it does not utilize samples with small, independent errors $y$, which could occur in addition to zero-knowledge samples. We introduce two statistical regression methods to cryptanalysis, Huber and Cauchy regression. They are both efficient and can handle instances with all three types of samples. At the same time, they are capable of handling high concealment rates, up to 90% in practical experiments. While Huber regression comes with theoretically appealing correctness guarantees, Cauchy regression performs best in practice. We use this efficacy to execute a novel profiling attack against a masked Dilithium implementation. The resulting ILWE instances suffer from both concealment and small, independent errors; as such, neither OLS nor ILP can recover the secret key. Cauchy regression, however, allows us to recover the secret key in under two minutes for all NIST security levels.
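To make the setting concrete, here is a minimal synthetic sketch of the idea behind Cauchy regression on CILWE-style data; this is not the paper's implementation, and the dimensions, concealment rate, and Cauchy scale are arbitrary illustrative choices. A fraction of samples carries a large masking term $y$; iteratively reweighted least squares with Cauchy weights down-weights those samples and converges on the small secret, while plain OLS is too noisy to round correctly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 2000                                      # secret dimension, number of samples
s = rng.integers(-2, 3, size=n).astype(float)       # small secret, as in Dilithium-like settings
C = rng.integers(-1, 2, size=(m, n)).astype(float)  # known "challenge" vectors

# Concealment: a fraction of samples are ordinary zero-knowledge samples
# carrying a large masking term y; the rest are zero-error samples (y = 0).
conceal_rate = 0.6
mask = rng.random(m) < conceal_rate
y = np.where(mask, rng.integers(-100, 101, size=m), 0).astype(float)
z = C @ s + y

# OLS estimate: with this much masking noise, the per-coefficient error is
# typically too large to round to the true integer secret.
s_ols = np.linalg.lstsq(C, z, rcond=None)[0]

# Cauchy IRLS: iteratively reweighted least squares with weights
# w_i = 1 / (1 + (r_i / scale)^2), which suppresses concealed samples.
s_hat = s_ols.copy()
for _ in range(50):
    r = z - C @ s_hat
    w = 1.0 / (1.0 + (r / 5.0) ** 2)
    sw = np.sqrt(w)
    s_hat = np.linalg.lstsq(sw[:, None] * C, sw * z, rcond=None)[0]

print(np.round(s_hat))  # recovers the small secret s on this instance
```

The zero-error samples end up with weight close to 1 and dominate the weighted fit, which is why the method tolerates high concealment rates.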
Last updated:  2025-09-15
Link Between the Differential Cryptanalysis and Linear Approximations over Finite Abelian Groups And Its Applications
Zhongfeng Niu, Siwei Sun, Hailun Yan, and Qi Wang
In recent years, progress in practical applications of multi-party computation (MPC), fully homomorphic encryption (FHE), and zero-knowledge proofs (ZKP) has motivated the exploration of symmetric-key cryptographic algorithms, as well as of the corresponding cryptanalysis techniques (such as differential cryptanalysis and linear cryptanalysis), over finite Abelian groups or prime fields $\mathbb{F}_p$ for large $p$. In this paper, we establish links between linear cryptanalysis and differential cryptanalysis over general finite Abelian groups. As the first application, we revisit linear cryptanalysis and give general results on linear approximations over arbitrary finite Abelian groups. More precisely, we consider the {\em linearity}, which is the maximal non-trivial linear approximation, to characterize the resistance of a function against linear cryptanalysis. This generalizes the work of Pott from 2004 and completes the generalization of the Sidelnikov–Chabaud–Vaudenay bound from $\mathbb{F}_2^n$ to finite Abelian groups. As the second application, we give an exact expression for the correlation of differential-linear approximations over arbitrary finite Abelian groups (in particular $\mathbb{F}_p^n$) under the sole assumption that the two parts of the cipher are independent of each other. In particular, we completely generalize differential-linear cryptanalysis from $\mathbb{F}_2^n$ to arbitrary finite Abelian groups.
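As an illustration of how linearity extends beyond $\mathbb{F}_2^n$, the following sketch computes correlations of linear approximations over $\mathbb{Z}_p$ via additive characters and takes the maximum over non-trivial output masks. The function $f(x) = x^3$ over $\mathbb{F}_7$ is an arbitrary toy choice, not taken from the paper.

```python
import cmath

p = 7

def f(x):
    # Toy example function over F_p (hypothetical choice for illustration).
    return (x ** 3) % p

def corr(a, b):
    # Correlation of the approximation b*f(x) ~ a*x, measured with the
    # additive character x -> exp(2*pi*i*x/p) of the group Z_p.
    w = cmath.exp(2j * cmath.pi / p)
    return abs(sum(w ** ((b * f(x) - a * x) % p) for x in range(p))) / p

# Linearity: maximal correlation over all non-trivial approximations (b != 0).
linearity = max(corr(a, b) for a in range(p) for b in range(1, p))
```

Over $\mathbb{F}_2^n$ the character reduces to $(-1)^x$ and this recovers the classical Walsh-transform correlation; the Abelian-group setting simply swaps in a different character.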
Last updated:  2025-09-15
Enabling Puncturable Encrypted Search over Lattice for Privacy-Preserving in Mobile Cloud
Yibo Cao, Shiyuan Xu, Gang Xu, Xiu-Bo Chen, Zongpeng Li, Jiawen Kang, and Dusit Niyato
Searchable encryption (SE) has been widely studied for mobile cloud computing, as it allows search over encrypted data. However, existing SE schemes cannot support fine-grained searchability revocation. Puncturable encryption (PE) can revoke the decryption ability for a specific message, which can potentially alleviate this issue. Moreover, the threat of quantum computing remains an important concern, potentially leading to privacy leakage in the mobile cloud. Consequently, a post-quantum puncturable encrypted search scheme has yet to be designed. In this paper, we propose PunSearch, the first puncturable encrypted search scheme over lattices for data privacy preservation in the mobile cloud. PunSearch provides fine-grained searchability revocation while enjoying quantum safety. Differently from existing PE schemes, we construct a novel trapdoor generation mechanism through evaluation algorithms and a lattice basis sampling technique. We then design a search permission verification method to revoke the searchability for specific keywords. Furthermore, we formulate a new IND-Pun-CKA model and utilize it to analyze the security of PunSearch. A comprehensive performance evaluation indicates that the computational overheads of the Encrypt, Trapdoor, Search, and Puncture algorithms in PunSearch are just 0.064, 0.005, 0.050, and 0.311 times those of prior art, respectively, in the best cases. These results demonstrate that PunSearch is effective and secure in mobile cloud computing.
Last updated:  2025-09-15
Post-Quantum Cryptography in Practice: A Literature Review of Protocol-Level Transitions and Readiness
Obianuju Egbuagha and Emmanuel Ikwunna
This paper presents a structured literature review of ongoing global efforts to integrate post-quantum cryptography (PQC) into widely deployed communication and identity protocols. We analyze current readiness, standardization initiatives, hybrid cryptographic approaches, and deployment challenges across multiple layers of the protocol stack, including TLS, SSH, VPNs, certificate infrastructure, and messaging protocols. We also discuss current deployment efforts and the technical challenges facing real-world implementation, including performance, interoperability, and resistance to side-channel attacks. Drawing on insights from recent research, industry trials, and open-source tools, the report aims to provide a clear and accessible overview of the growing role of PQC in securing the future of digital communication. We aim to guide researchers, developers, and policymakers in understanding the state of PQC integration and to encourage broader involvement in the testing, implementation, and evaluation of next-generation cryptographic solutions.
Last updated:  2025-09-15
Collusion-Resilience in Transaction Fee Mechanism Design
Hao Chung, Tim Roughgarden, and Elaine Shi
Users bid in a transaction fee mechanism (TFM) to get their transactions included and confirmed by a blockchain protocol. Roughgarden (EC'21) initiated the formal treatment of TFMs and proposed three requirements: user incentive compatibility (UIC), miner incentive compatibility (MIC), and a form of collusion-resilience called OCA-proofness. Ethereum's EIP-1559 mechanism satisfies all three properties simultaneously when there is no contention between transactions, but loses the UIC property when there are too many eligible transactions to fit in a single block. Chung and Shi (SODA'23) considered an alternative notion of collusion-resilience, called $c$-side-contract-proofness ($c$-SCP), and showed that, when there is contention between transactions, no TFM can satisfy UIC, MIC, and $c$-SCP for any $c\geq 1$. OCA-proofness asserts that the users and a miner should not be able to ``steal from the protocol.'' On the other hand, the $c$-SCP condition requires that a coalition of a miner and a subset of users should not be able to profit through strategic deviations (whether at the expense of the protocol or of the users outside the coalition). Our main result is the first proof that, when there is contention between transactions, no (possibly randomized) TFM in which users are expected to bid truthfully satisfies UIC, MIC, and OCA-proofness. This result resolves the main open question in Roughgarden (EC'21). We also suggest several relaxations of the basic model that allow our impossibility result to be circumvented.
Last updated:  2025-09-15
Persistence of Hourglass(-like) Structure: Improved Differential-Linear Distinguishers for Several ARX Ciphers
Xinxin Gong, Qingju Wang, Yonglin Hao, Lin Jiao, and Xichao Hu
The ARX structure plays a crucial role in symmetric-key primitives, and differential-linear (DL) attacks are among the most effective cryptanalysis techniques against ARX ciphers. In this paper, we present a systematic re-decomposition technique for DL distinguishers of ARX ciphers and identify, for the first time, the hourglass(-like) structural commonalities among the optimal DL distinguishers found by various deduction techniques, which we also support through comprehensive experiments. This motivates us to develop an efficient and generalized approach to constructing optimal hourglass(-like) structural DL distinguishers. Our method yields significant advances when applied to Speck, Alzette, and the underlying permutations of SipHash and Chaskey: (1) the first 11- to 14-round DL distinguishers of Alzette; (2) the first (valid) DL distinguishers for 11-round Speck32, 12-round Speck48, and 16-round Speck96; (3) deterministic (correlation $\pm1$) 3-round DL distinguishers for SipHash-2-4 and significantly improved 4-round ones. All these distinguishers come with both theoretical and experimental verification. We further analyze ARX-based Latin dance stream ciphers, obtaining improved DL distinguishers for 7/7.25-round ChaCha, 8-round Salsa, and 5.5-round Forró. Although some of the improvements are modest, they verify the effectiveness of our method across a broader range of instances. This work provides new insights into DL distinguisher construction and enhances understanding of the security of ARX ciphers.
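For reference, the ARX (modular Addition, Rotation, XOR) structure these distinguishers target is very simple. The sketch below shows one round of Speck32 and its inverse, using the 16-bit word size and rotation amounts 7 and 2 from the Speck specification; key scheduling is omitted.

```python
MASK = 0xFFFF  # Speck32 operates on 16-bit words

def ror(x, r):
    # Rotate a 16-bit word right by r positions.
    return ((x >> r) | (x << (16 - r))) & MASK

def rol(x, r):
    # Rotate a 16-bit word left by r positions.
    return ((x << r) | (x >> (16 - r))) & MASK

def speck32_round(x, y, k):
    # One ARX round: rotate, add mod 2^16, xor the round key, rotate, xor.
    x = ((ror(x, 7) + y) & MASK) ^ k
    y = rol(y, 2) ^ x
    return x, y

def speck32_round_inv(x, y, k):
    # Inverse round, undoing each ARX step in reverse order.
    y = ror(y ^ x, 2)
    x = rol(((x ^ k) - y) & MASK, 7)
    return x, y
```

DL cryptanalysis exploits exactly the interplay visible here: differences propagate through the modular addition while linear masks track the XORs and rotations.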