C2 seminar of Friday 10 June in Rennes

The next C2 seminar will be held in person on Friday 10 June 2022 at Université de Rennes 1, in the Petri-Turing room of IRISA on the Beaulieu campus.

Videoconference link to come.

A buffet is planned. To help us organize it properly, please fill in the Evento poll: https://evento.renater.fr/survey/presence-buffet-au-s…-kby7ubdp

The program is as follows:

  • 10h00, Clara Pernot, INRIA Paris, New Representations of the AES Key Schedule

Abstract: In this talk, we present a new representation of the AES key schedule, with some implications for the security of AES-based schemes. In particular, we show that the AES-128 key schedule can be split into four independent parallel computations operating on 32-bit chunks, up to a linear transformation. Surprisingly, this property had not been described in the literature in more than 20 years of analysis of AES. We show two consequences of our new representation, improving previous cryptanalysis results for AES-based schemes. First, we observe that iterating an odd number of key-schedule rounds yields a function with short cycles. This explains an observation of Khairallah on mixFeed, a second-round candidate in the NIST lightweight competition. Our analysis actually shows that his forgery attack on mixFeed succeeds with probability 0.44 (with data complexity 220 GB), breaking the scheme in practice. The same observation also leads to a novel attack on ALE, another AES-based AEAD scheme.

Our new representation also gives efficient ways to combine information from the first subkeys with information from the last subkeys in order to reconstruct the corresponding master keys. In particular, we improve previous impossible-differential attacks against AES-128.

This is joint work with Gaëtan Leurent.
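For reference, the object under study, the AES-128 key schedule as specified in FIPS 197, fits in a few lines of Python. The sketch below derives the S-box from inversion in GF(2^8) rather than a hard-coded table; it shows only the standard schedule, not the new representation from the talk:

```python
def gf_mul(a, b):
    # Multiplication in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def sbox(x):
    # AES S-box: inversion in GF(2^8) (with 0 mapped to 0), then an affine map.
    inv = next(g for g in range(1, 256) if gf_mul(x, g) == 1) if x else 0
    out = 0
    for i in range(8):
        bit = 0
        for k in (0, 4, 5, 6, 7):
            bit ^= (inv >> ((i + k) % 8)) & 1
        out |= bit << i
    return out ^ 0x63

RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def expand_key(key16):
    # AES-128 key expansion: 44 words w[0..43]; w[4i..4i+3] is round key i.
    w = [int.from_bytes(key16[4 * i:4 * i + 4], "big") for i in range(4)]
    for i in range(4, 44):
        t = w[i - 1]
        if i % 4 == 0:
            t = ((t << 8) | (t >> 24)) & 0xFFFFFFFF  # RotWord
            t = int.from_bytes(bytes(sbox(c) for c in t.to_bytes(4, "big")), "big")  # SubWord
            t ^= RCON[i // 4 - 1] << 24
        w.append(w[i - 4] ^ t)
    return w

w = expand_key(bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c"))
print(f"{w[4]:08x}")   # a0fafe17, matching the FIPS-197 key expansion example
```

The result of the talk is that this iteration, viewed through a suitable linear change of variables, splits into four parallel 32-bit maps.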

  • 11h30, Thibauld Feneuil, CryptoExperts and Sorbonne Université – Syndrome Decoding in the Head: Shorter Signatures from Zero-Knowledge Proofs

Abstract: In this talk, I will present a new zero-knowledge proof of knowledge for the syndrome decoding (SD) problem on random linear codes. Instead of using permutations like most existing protocols, we rely on the MPC-in-the-head paradigm, in which we reduce the task of proving the low Hamming weight of the SD solution to proving relations between specific polynomials. Specifically, we propose a 5-round zero-knowledge protocol that proves knowledge of a vector x such that y = Hx and wt(x) ≤ w, and which achieves a soundness error close to 1/N for an arbitrary N.

While turning this protocol into a signature scheme, we achieve a signature size of 11-12 KB for 128-bit security when relying on the hardness of the SD problem over binary fields. Using larger fields (such as F_{256}), we can produce fast signatures of around 8 KB. This allows us to outperform Picnic3 and to be competitive with SPHINCS+, both post-quantum signature candidates in the ongoing NIST standardization effort. Since the security relies on the hardness of the syndrome decoding problem for random linear codes, which is known to be NP-hard and for which the state of the art in cryptanalysis has been stable for many years, the result is a conservative signature scheme. Moreover, our scheme outperforms all existing code-based signature schemes for the common "signature size + public key size" metric.

Joint work with Antoine Joux and Matthieu Rivain.
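The relation being proven, y = Hx with wt(x) ≤ w over GF(2), can be sanity-checked in a few lines. This toy, with made-up parameters n, k, w, only verifies an SD instance in the clear; it implements nothing of the zero-knowledge protocol itself:

```python
import random

random.seed(1)
n, k, w = 24, 12, 4          # toy code length, dimension, weight bound

# Random parity-check matrix H ((n-k) x n) over GF(2), rows stored as bitmasks.
H = [random.getrandbits(n) for _ in range(n - k)]

# Plant a solution x of Hamming weight exactly w and compute its syndrome y = Hx.
support = random.sample(range(n), w)
x = sum(1 << i for i in support)
y = [bin(row & x).count("1") & 1 for row in H]

def sd_check(H, y, x, w):
    # The verifier's relation: y = Hx over GF(2) and wt(x) <= w.
    return (all(bin(row & x).count("1") & 1 == yi for row, yi in zip(H, y))
            and bin(x).count("1") <= w)

assert sd_check(H, y, x, w)
```

The protocol of the talk proves knowledge of such an x without revealing it, by expressing the weight condition as polynomial relations shared among virtual MPC parties.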

  • 13h30, Fangan Yssouf Dosso, École des Mines de Saint-Étienne, SAS Department – PMNS for efficient arithmetic and small memory cost

Abstract: The Polynomial Modular Number System (PMNS) is an integer number system that aims to speed up arithmetic operations modulo a prime p. Such a system is defined by a tuple (p, n, g, r, E), where p, n, g and r are positive integers and E is a monic polynomial with integer coefficients having g as a root modulo p. Most of the work on PMNS focuses on polynomials E such that E(X) = X^n – l, where l is a nonzero integer, because such polynomials provide efficiency and low memory cost, in particular for l = 2 or -2.

However, these turn out not always to be the best choices. In this presentation, we start with the necessary background on PMNS. We then highlight a new set of polynomials E allowing very efficient PMNS operations with low memory requirements, along with new bounds and parameters. We show that these polynomials are more interesting than (most) polynomials E(X) = X^n – l. Finally, we show how to use PMNS to randomise arithmetic operations, in order to randomise higher-level operations such as elliptic curve scalar multiplication and protect implementations against advanced side-channel attacks such as differential power analysis (DPA).

Joint work with Jean-Marc Robert and Pascal Véron.
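A toy instance makes the PMNS mechanics concrete. The parameters below (p = 31, n = 3, g = 4, E(X) = X^3 - 2) are illustrative only and far from cryptographic size; the sketch checks that polynomial multiplication followed by external reduction modulo E preserves the represented value:

```python
import random

# Toy PMNS-style parameters: g = 4 is a root of E(X) = X^3 - 2 modulo p = 31,
# since 4^3 - 2 = 62 = 2 * 31.  (Real parameters are far larger, and digits
# are kept below a bound r, which this sketch does not enforce.)
p, n, g, lam = 31, 3, 4, 2
assert (g**n - lam) % p == 0

def to_int(A):
    # A polynomial A represents the integer A(g) mod p.
    return sum(c * g**i for i, c in enumerate(A)) % p

def mul_mod_E(A, B):
    # Schoolbook product, then "external reduction" modulo E(X) = X^n - lam:
    # each X^n is replaced by lam, so the degree stays below n.
    C = [0] * (2 * n - 1)
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            C[i + j] += a * b
    for i in range(2 * n - 2, n - 1, -1):
        C[i - n] += lam * C[i]
        C[i] = 0
    return C[:n]

random.seed(0)
for _ in range(100):
    A = [random.randrange(-5, 6) for _ in range(n)]
    B = [random.randrange(-5, 6) for _ in range(n)]
    # Since E(g) = 0 mod p, reducing modulo E does not change the value at g.
    assert to_int(mul_mod_E(A, B)) == (to_int(A) * to_int(B)) % p
```

The choice of E governs both the cost of this external reduction and the growth of the coefficients, which is exactly the trade-off the talk revisits.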

  • 15h00 : Ivan Pogildiakov, IRMAR, Université Rennes 1 – Binary codes, hyperelliptic curves, and the Serre bound

Abstract: TBA

C2 seminar of Friday 17 December in Rennes – postponed due to COVID

The next C2 seminar will be held in person on Friday 17 December 2021 at Université de Rennes 1, in the Métivier amphitheatre of IRISA on the Beaulieu campus.

Videoconference link to come.

The program is as follows:

  • 9h30, Aurore Guillevic, Inria Nancy and Aarhus University – Elliptic curves for zk-SNARKs

Joint work with Youssef El Housni (ConsenSys, Inria Saclay, École Polytechnique) and Diego Aranha (Aarhus University).

Abstract: SNARK stands for Succinct Non-interactive ARgument of Knowledge; SNARKs are a powerful tool in blockchains and cryptocurrencies. The most efficient at the moment is the Groth'16 preprocessing SNARK, implemented with a pairing-friendly elliptic curve having a subgroup of prime order q of 255 bits (the BLS12-381 curve). It provides a zero-knowledge proof of an arithmetic circuit that performs arithmetic operations modulo q.

For a recursive SNARK, one applies the arithmetisation-and-proof strategy again: a compiler produces an arithmetic circuit that performs some cryptographic computation (in Groth'16: computing pairings and checking an equation). This second circuit deals with arithmetic operations not modulo q but modulo p, the field of definition of the first curve. A second pairing-friendly elliptic curve is required, whose order should be this prime p: this is a chain of elliptic curves.

To achieve a recursive SNARK and recursively prove proofs (a.k.a. bootstrapping proofs), Ben-Sasson, Chiesa, Tromer and Virza introduced in 2014 a cycle of pairing-friendly curves: an MNT curve of prime order q defined over a field GF(p), and a second MNT curve of prime order p defined over the prime field GF(q). That is, the field of definition of each curve is the order of the other curve. However, generating such curves takes a large computational effort: [BCTV14] spanned all curve discriminants up to 10^15, using 610 000 core-hours on a cluster, to obtain two cycles: one of 298 bits and a second one of 753 bits. Moreover, the pairing takes its values in a quartic extension (1192 and 3012 bits respectively), which no longer provides 128 bits of security, because of the TNFS algorithm for computing discrete logarithms in extension fields (De Micheli et al. presented the first implementation at Asiacrypt'21). While a cycle of pairing-friendly curves is optimal for a recursive SNARK construction, it is very slow because of the many constraints on the curves. To date, no other cycle of pairing-friendly curves is known.

To circumvent the slowness of such cycles, one can instead use a chain of curves: the second (outer) curve has a subgroup of prime order equal to the field of definition of the first (inner) curve. Such a construction is much easier, and more pairing-friendly curves become possible. We generalise these constructions to families of SNARK-friendly 2-chains of pairing-friendly elliptic curves and compare their efficiency.

Finally, other cycles, half-cycles and chains of curves arise in variants of zero-knowledge proofs: cycles of non-pairing-friendly curves such as Pallas-Vesta, and half-pairing-friendly cycles such as Pluto-Eris. If time allows, an overview of these alternative curves will be given.
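The cycle condition itself is elementary and can be brute-forced at toy sizes: two primes p ≠ q and curves E1/GF(p), E2/GF(q) with #E1 = q and #E2 = p. The sketch below ignores pairing-friendliness entirely and uses cryptographically meaningless sizes; it only illustrates the order/field condition:

```python
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

def count_points(p, a, b):
    # Naive point count on y^2 = x^3 + a*x + b over GF(p), plus infinity.
    squares = {x * x % p for x in range(p)}
    n = 1
    for x in range(p):
        rhs = (x**3 + a * x + b) % p
        n += 1 if rhs == 0 else 2 if rhs in squares else 0
    return n

def find_cycle(limit=50):
    # Search for primes p != q and curves whose orders cross over:
    # #E1/GF(p) = q and #E2/GF(q) = p.  (Char 2 and 3 are avoided so that
    # the short Weierstrass form and its discriminant make sense.)
    for p in (p for p in range(5, limit) if is_prime(p)):
        for a1 in range(p):
            for b1 in range(p):
                if (4 * a1**3 + 27 * b1**2) % p == 0:
                    continue
                q = count_points(p, a1, b1)
                if q == p or q < 5 or not is_prime(q):
                    continue
                for a2 in range(q):
                    for b2 in range(q):
                        if (4 * a2**3 + 27 * b2**2) % q == 0:
                            continue
                        if count_points(q, a2, b2) == p:
                            return p, (a1, b1), q, (a2, b2)

p, E1, q, E2 = find_cycle()
print(p, E1, q, E2)
```

At real sizes one cannot search like this, of course; MNT-style cycles come from CM constructions, and the scarcity of pairing-friendly ones is the bottleneck discussed above.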

  • 10h45, Alexandre Wallet, Univ Rennes, CNRS, IRISA – Towards deploying lattice-based "hash-and-sign" schemes

Abstract: The NIST post-quantum standardization competition is nearing its end. Among the three main finalists selected for digital signatures is the Falcon proposal. This scheme is an efficient instantiation of the "GPV" paradigm (Gentry-Peikert-Vaikuntanathan), which yields digital signature schemes based on the hardness of problems over structured Euclidean lattices. In terms of speed, and especially of bandwidth, Falcon seems to outclass its direct competitors. However, the scheme suffers from conceptual limitations that may hinder its use in various scenarios: 1) the parameter sets are very restricted; 2) implementing the scheme is a delicate exercise; 3) protecting the implementation, at least against timing attacks, seriously impacts the scheme's performance (besides being, again, technically demanding).

In this talk, I will describe in more detail the scheme's limiting building blocks, mainly related to the sampling of Gaussian vectors in Euclidean lattices. Indeed, in the case of Falcon, this algorithmic step determines the theoretical and concrete security of the scheme, but it is also a source of leakage from the point of view of side-channel attacks. I will then present Mitaka, an alternative proposal following Falcon's design, which avoids the pitfalls above. Our scheme Mitaka is thus 1) conceptually simpler, and instantiable with large families of parameters; 2) noticeably faster for equivalent security and bandwidth consumption; 3) maskable at arbitrary order at a modest cost, using standard techniques. Finally, Mitaka can be implemented without resorting to floating-point arithmetic.

The talk is based on several joint works with T. Espitau, P.-A. Fouque, F. Gérard, P. Kirchner, M. Rossi, A. Takahashi, M. Tibouchi and Y. Yu.
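The algorithmic step in question can be illustrated with a minimal discrete Gaussian sampler over the integers. This is a deliberately naive rejection sampler with made-up parameters; real schemes use constant-time samplers precisely because a version like this branches on secret-dependent values:

```python
import math
import random

def sample_dgauss(sigma, center=0.0, tau=10, rng=random):
    # Rejection sampling of an integer z with probability proportional to
    # exp(-(z - center)^2 / (2 sigma^2)), tail-cut at tau * sigma.
    # NB: the data-dependent loop below is exactly the kind of timing
    # leak discussed in the talk; this sketch is for illustration only.
    lo = int(math.floor(center - tau * sigma))
    hi = int(math.ceil(center + tau * sigma))
    while True:
        z = rng.randint(lo, hi)
        if rng.random() < math.exp(-(z - center) ** 2 / (2 * sigma ** 2)):
            return z

random.seed(42)
samples = [sample_dgauss(4.0) for _ in range(2000)]
print(sum(samples) / len(samples))   # empirical mean, close to 0
```

In GPV-style signing, such samplers run over lattice cosets with secret-dependent centers, which is why both their statistical quality and their leakage profile matter.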

  • 14h00 : Clara Pernot, INRIA Paris, New Representations of the AES Key Schedule

Abstract: In this talk, we present a new representation of the AES key schedule, with some implications for the security of AES-based schemes. In particular, we show that the AES-128 key schedule can be split into four independent parallel computations operating on 32-bit chunks, up to a linear transformation. Surprisingly, this property had not been described in the literature in more than 20 years of analysis of AES. We show two consequences of our new representation, improving previous cryptanalysis results for AES-based schemes. First, we observe that iterating an odd number of key-schedule rounds yields a function with short cycles. This explains an observation of Khairallah on mixFeed, a second-round candidate in the NIST lightweight competition. Our analysis actually shows that his forgery attack on mixFeed succeeds with probability 0.44 (with data complexity 220 GB), breaking the scheme in practice. The same observation also leads to a novel attack on ALE, another AES-based AEAD scheme.

Our new representation also gives efficient ways to combine information from the first subkeys with information from the last subkeys in order to reconstruct the corresponding master keys. In particular, we improve previous impossible-differential attacks against AES-128. This is joint work with Gaëtan Leurent.

  • 15h15 : Victor Mollimard, Univ Rennes, CNRS, IRISA – Efficient Methods to Search for Best Differential Characteristics on SKINNY

Abstract: Evaluating the resistance of ciphers against differential cryptanalysis is essential to define the number of rounds of new designs and to mount attacks derived from differential cryptanalysis. In this paper, we propose automatic tools to find the best differential characteristics on the SKINNY block cipher. As usually done in the literature, we split this search into two stages, denoted Step 1 and Step 2. In Step 1, we aim at finding all truncated differential characteristics with a low enough number of active S-boxes. Then, in Step 2, we try to instantiate each difference value while maximizing the overall differential characteristic probability. We solve Step 1 using an ad hoc method inspired by the work of Fouque et al., whereas Step 2 is modelled with the Choco-solver library, as it seems to outperform all previous methods at this stage. Notably, for SKINNY-128 in the SK model and for 13 rounds, we retrieve the results of Abdelkhalek et al. within a few seconds (compared with 16 days), and we provide, for the first time, the best differential related-tweakey characteristics up to 14 rounds for the TK1 model. Regarding the TK2 and TK3 models, we were not able to test all the solutions of Step 1, so the differential characteristics we found up to 16 and 17 rounds are not necessarily optimal.
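The probabilities maximized in Step 2 ultimately come from the S-box's difference distribution table (DDT). A short sketch, using PRESENT's 4-bit S-box as a stand-in (the SKINNY S-boxes would be handled identically):

```python
# PRESENT's 4-bit S-box, used here only as a convenient example.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    # DDT[a][b] = #{x : S(x ^ a) ^ S(x) = b}.  The differential probability
    # of the transition a -> b through the S-box is DDT[a][b] / 16.
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for a in range(n):
        for x in range(n):
            table[a][sbox[x ^ a] ^ sbox[x]] += 1
    return table

T = ddt(SBOX)
best = max(T[a][b] for a in range(1, 16) for b in range(16))
print(best / 16)   # maximum differential probability of one S-box: 0.25
```

A characteristic's probability is (heuristically) the product of such per-S-box probabilities over all active S-boxes, which is why Step 1 minimizes their count before Step 2 instantiates the differences.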

C2 seminar of Friday 15 October 2021

The next C2 seminar will be held in person on Friday 15 October 2021 at Inria Paris, 2 rue Simone Iff, Paris 12.

The link to join the meeting by videoconference is:

The program is as follows:

  • 10h00, Katharina Boudgoust, IRISA, Université Rennes 1, Hardness of Module Learning With Errors With Small Secrets

Abstract: This talk focuses on the module variant of the Learning With Errors problem (Module-LWE), a fundamental computational problem in today's lattice-based cryptography. To highlight its importance for practical schemes, note that several finalists of the NIST standardization project rely on the hardness of Module-LWE, e.g., the signature scheme Dilithium and the key encapsulation mechanism Kyber.

More concretely, in this talk we consider Module-LWE where the underlying secret is of small norm. Whereas several works prove the hardness of unstructured LWE with short secrets, little was known for the module case. In this presentation, I summarize our results, which previously appeared at Asiacrypt 2020 and CT-RSA 2021, where we investigate the hardness of Module-LWE when the secret has coefficients in the set {0,1}, also called binary Module-LWE. Furthermore, I show that both results naturally generalize to secrets with bounded coefficients. The talk concludes with some interesting open questions related to the hardness of Module-LWE with non-uniform secrets.

This is joint work with Corentin Jeudy, Adeline Roux-Langlois and Weiqiang Wen.
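A Module-LWE sample with a binary secret is easy to write down explicitly. A toy sketch over R_q = Z_q[x]/(x^n + 1), with made-up, non-secure parameters:

```python
import random

random.seed(0)
q, n, d = 3329, 16, 2   # toy modulus, ring degree, module rank (illustrative only)

def poly_mul(f, g):
    # Product in R_q = Z_q[x]/(x^n + 1): negacyclic convolution.
    r = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j < n:
                r[i + j] = (r[i + j] + fi * gj) % q
            else:
                r[i + j - n] = (r[i + j - n] - fi * gj) % q
    return r

def poly_add(f, g):
    return [(x + y) % q for x, y in zip(f, g)]

# A Module-LWE sample (A, b = A*s + e) with a *binary* secret s, the variant
# studied in the talk: every coefficient of s lies in {0, 1}.
A = [[[random.randrange(q) for _ in range(n)] for _ in range(d)] for _ in range(d)]
s = [[random.randrange(2) for _ in range(n)] for _ in range(d)]
e = [[random.randrange(-2, 3) % q for _ in range(n)] for _ in range(d)]
b = []
for i in range(d):
    acc = e[i]
    for j in range(d):
        acc = poly_add(acc, poly_mul(A[i][j], s[j]))
    b.append(acc)
```

The hardness question of the talk is whether (A, b) with such a small secret remains as hard to distinguish from uniform as with a uniform secret.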

  • 11h15, Magali Bardet, LITIS, Université de Rouen Normandie, Algebraic cryptanalysis of problems in code-based cryptography and applications

Abstract: Rank-metric code-based cryptography relies on the hardness of decoding a random linear code in the rank metric. This fundamental problem is called the MinRank problem and is ubiquitous in rank-metric (and even Hamming-metric) code-based cryptography, as well as in multivariate cryptography. For the structured instances arising in the former, security relies on a more specific problem, namely the Rank Syndrome Decoding problem. There is also a generalization called the Rank Support Learning problem, where the attacker has access to several syndromes corresponding to errors with the same support. These problems have various applications in code-based and multivariate cryptography (KEM and signature schemes), and a precise understanding of the complexity of solving them can help designers choose secure parameters.

In this talk, I will present the three problems and their relations to cryptographic schemes, their algebraic modelling, and recent improvements in our understanding of the complexity of solving these systems using algebraic techniques such as Gröbner basis computations.

Joint work with Pierre Briaud, Maxime Bros, Daniel Cabarcas, Philippe Gaborit, Vincent Neiger, Ray Perlner, Olivier Ruatta, Daniel Smith-Tone, Jean-Pierre Tillich and Javier Verbel.
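The MinRank problem mentioned above can be stated concretely: given matrices M_1, ..., M_k over GF(q), find a combination sum x_i M_i of rank at most r. A toy sketch that plants and verifies such an instance (illustrative sizes only, nothing here resembles a real attack):

```python
import random

def rank_mod_p(M, p):
    # Rank of a matrix over GF(p) by Gaussian elimination.
    M = [row[:] for row in M]
    rank, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        piv = next((r for r in range(rank, rows) if M[r][c] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], p - 2, p)
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(rows):
            if r != rank and M[r][c] % p:
                f = M[r][c]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

random.seed(3)
p, m, k, r = 101, 6, 4, 2

# Plant a target matrix T of rank <= r as a product of thin factors...
U = [[random.randrange(p) for _ in range(r)] for _ in range(m)]
V = [[random.randrange(p) for _ in range(m)] for _ in range(r)]
T = [[sum(U[i][l] * V[l][j] for l in range(r)) % p for j in range(m)] for i in range(m)]

# ...and hide it as the combination sum_l x_l M_l of k public matrices.
x = [random.randrange(1, p) for _ in range(k)]
M = [[[random.randrange(p) for _ in range(m)] for _ in range(m)] for _ in range(k - 1)]
inv_last = pow(x[-1], p - 2, p)
M.append([[(T[i][j] - sum(x[l] * M[l][i][j] for l in range(k - 1))) * inv_last % p
           for j in range(m)] for i in range(m)])

comb = [[sum(x[l] * M[l][i][j] for l in range(k)) % p for j in range(m)] for i in range(m)]
assert comb == T and rank_mod_p(comb, p) <= r   # the planted solution is low-rank
```

Algebraic attacks of the kind discussed in the talk model the low-rank condition as polynomial equations in the x_l (and auxiliary variables) and solve the resulting system.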

  • 13h45 : Antonin Leroux, Grace, LIX and Inria Saclay–Île-de-France, Séta: Supersingular Encryption from Torsion Attacks

Abstract: We present Séta, a new family of public-key encryption schemes with post-quantum security based on isogenies of supersingular elliptic curves. It is constructed from a new family of trapdoor one-way functions, in which the inversion algorithm uses Petit's so-called torsion attacks on SIDH to compute an isogeny between supersingular elliptic curves, given an endomorphism of the starting curve and images of torsion points. We prove the OW-CPA security of Séta and present an IND-CCA variant using the post-quantum OAEP transformation. Several variants for key generation are explored, together with their impact on the selection of parameters such as the base prime of the scheme. We furthermore formalise an "uber" isogeny assumption framework, which aims to generalize the computational isogeny problems encountered in schemes including SIDH, CSIDH, OSIDH and ours. Finally, we carefully select parameters to achieve a balance between security and running times, and present experimental results from our implementation.

  • 15h00 : Chloé Hébant, Cascade, ENS and Inria Paris – Traceable Constant-Size Multi-Authority Credentials

Abstract: Many attribute-based anonymous credential (ABC) schemes have been proposed, allowing a user to prove possession of some attributes anonymously. They have become more and more practical, with, in the most recent papers, a constant-size credential to show a subset of attributes issued by a single credential issuer. However, proving possession of attributes coming from K different credential issuers usually requires K independent credentials to be shown. Only attribute-based credential schemes built from aggregatable signatures can overcome this issue.

In this paper, we propose new ABC schemes from aggregatable signatures with randomizable tags. We consider malicious credential issuers, with adaptive corruptions and collusions with malicious users. Whereas our constructions only support selective disclosure of attributes, to remain compact, our approach significantly improves the time and memory complexity of showing multiple attributes: for the first time, the cost for the prover is (almost) independent of the number of attributes and the number of credential issuers.

Whereas anonymous credentials require user privacy, we propose the first schemes that also allow traceability.

We formally define an aggregatable signature scheme with (traceable) randomizable tags, which is of independent interest. We build concrete schemes from the recent linearly homomorphic signature scheme of PKC '20. Like all recent ABC schemes, our construction relies on signatures whose unforgeability is proven in the bilinear generic group model.

C2 seminar of Friday 4 June 2021

Program of the seminar of Friday 4 June 2021

    • 10h00 – Jade Nardi, INRIA, Interactive proofs of proximity to an algebraic-geometry code
Abstract: In recent years, enormous progress has been made in verifiable computation, which makes it possible to provide, along with a result, a short and quickly verifiable proof that the computation was carried out correctly. Some of these proofs can also have a zero-knowledge property, and they are now widely deployed in blockchain-related applications.
Among the various families of verifiable computation schemes, one family of constructions relies fundamentally on proximity tests to algebraic codes. The constructions implemented in industry use an interactive protocol called FRI (Fast Reed-Solomon Interactive Oracle Proof of Proximity) to verify proximity to Reed-Solomon codes in logarithmic time (in the code length).
Naturally, one may wonder whether algebraic-geometry codes, which generalize Reed-Solomon codes, admit proximity tests with similar parameters. In this talk, we will present the FRI protocol in detail, stepping back to examine the mathematical tools behind its efficiency. We will describe how to generalize these tools to geometric codes (which we will take the time to introduce briefly), which obstacles naturally arise from the geometry of non-rational curves, and how to overcome them by adapting the protocol. We will detail the case of codes over Kummer extensions.
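The heart of FRI is a folding step that halves the degree while remaining checkable from a few point queries. A minimal sketch over a toy prime field (the talk's subject is precisely how to generalize this step beyond Reed-Solomon codes):

```python
p = 97                      # toy prime field

def ev(coeffs, x):
    # Horner evaluation of a polynomial over GF(p).
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def fold(coeffs, beta):
    # One FRI folding step: write f(x) = fe(x^2) + x * fo(x^2) and
    # return g = fe + beta * fo, a polynomial of half the degree.
    fe, fo = coeffs[0::2], coeffs[1::2]
    fo = fo + [0] * (len(fe) - len(fo))
    return [(a + beta * b) % p for a, b in zip(fe, fo)]

f = [3, 1, 4, 1, 5, 9, 2, 6]      # degree 7
beta = 5
g = fold(f, beta)
assert len(g) == 4                 # degree halves: 7 -> 3

# Local consistency relation checked by the verifier on queried points:
# g(x^2) = (f(x) + f(-x)) / 2 + beta * (f(x) - f(-x)) / (2x).
inv2 = pow(2, p - 2, p)
for x in range(1, p):
    lhs = ev(g, x * x % p)
    rhs = ((ev(f, x) + ev(f, -x % p)) * inv2
           + beta * (ev(f, x) - ev(f, -x % p)) * inv2 * pow(x, p - 2, p)) % p
    assert lhs == rhs
```

The efficiency of the real protocol comes from the map x -> x^2 halving the evaluation domain at each round; on curves of positive genus no such nice degree-2 map exists in general, which is the obstacle the talk addresses.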


    • 11h00 – David Pointcheval, ENS, Verifiable Oblivious Polynomial Evaluation
Abstract: Oblivious Polynomial Evaluation lets Alice obtain the evaluation of a polynomial known only to Bob on a secret input of her choice. Applications are numerous, such as Private Set Intersection; in that case, it is moreover necessary to make sure that Bob uses the same polynomial for every evaluation.
In this talk, we show how to prove the correct evaluation of a secret polynomial, committed to by Bob, on an encrypted input from Alice, using Fully Homomorphic Encryption. The proof is compact and independent of the degree of the polynomial being evaluated.
This is joint work with Paola de Perthuis, Malika Izabachène and Anca Nitulescu.

C2 seminar of Friday 19 March 2021

Seminar program

  • 10h00 – Mélissa Rossi, ANSSI, Assessing residual security of lattice-based cryptography
Abstract: This talk will present a framework for the cryptanalysis of lattice-based schemes when side information about the secret, in the form of "hints", is available. The presentation outlines joint work with Dana Dachman-Soled, Léo Ducas and Huijing Gong presented at CRYPTO 2020 (https://eprint.iacr.org/2020/292).
This framework generalizes the primal lattice reduction attack and allows the progressive integration of hints before running a final lattice reduction step. The techniques for integrating hints include sparsifying the lattice, projecting onto and intersecting with hyperplanes, and/or altering the distribution of the secret vector. The main contribution is a toolbox and a methodology to integrate such hints into lattice reduction attacks and to predict the performance of those attacks with side information.
While initially designed for side-channel information, the framework can also be used in other settings: for example, one can simply exploit constraints imposed by certain schemes (LAC, Round5, NTRU). In addition, I will present a way to combine this framework with decryption-failure information, based on joint work with Jan-Pieter D'Anvers and Fernando Virdia presented at EUROCRYPT 2020 (https://eprint.iacr.org/2019/1399).


  • 11h00 – André Schrottenloher, CWI Amsterdam, Quantum tools for symmetric cryptanalysis
Abstract: The security of (symmetric) cryptosystems is based on cryptanalysis: our confidence stems from extensive and continuous public scrutiny. Meanwhile, some consequences of large-scale quantum computing devices for cryptography are well known, for example the quadratic speedup of Grover's algorithm against exhaustive key search. But the post-quantum security of symmetric cryptography cannot be precisely asserted unless we try to design more specific attacks, as we do in the classical world.
In this talk, I will give two examples of symmetric designs that can be attacked with specific quantum algorithms. I will first focus on the Even-Mansour construction, which turns a public permutation into a cipher. Even-Mansour has a strong algebraic structure and, in fact, is completely broken if the adversary is allowed "superposition queries". Yet this structure seems difficult to exploit for an adversary making standard, classical queries, and this has been an open problem for a while. We will see how to significantly improve the quantum key recovery on Even-Mansour using the "offline Simon's algorithm" (joint work with X. Bonnetain, A. Hosoyamada, M. Naya-Plasencia and Y. Sasaki).
Our second target will be multiple encryption: a cipher obtained by composing independent ciphers in sequence, with independent keys. I will briefly introduce quantum merging algorithms (joint work with M. Naya-Plasencia) and their application to exhaustive key search in multiple encryption. These algorithms are quite generic and have many other applications in cryptography.
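As a classical baseline for multiple encryption, the meet-in-the-middle attack on double encryption trades memory for time. A toy sketch with a made-up 10-bit keyed permutation (the quantum merging algorithms of the talk improve on this kind of search):

```python
import random

random.seed(11)
N = 1 << 10                            # toy block and key space

# A toy keyed permutation (Even-Mansour-like, purely illustrative).
P = list(range(N))
random.shuffle(P)
Pinv = [0] * N
for i, y in enumerate(P):
    Pinv[y] = i
enc = lambda k, x: P[x ^ k] ^ k
dec = lambda k, y: Pinv[y ^ k] ^ k

# Double encryption with independent keys.
ka, kb = random.randrange(N), random.randrange(N)
pt = random.randrange(N)
ct = enc(kb, enc(ka, pt))

# Meet-in-the-middle: ~2*N encryptions plus a table, instead of N^2.
table = {}
for k in range(N):
    table.setdefault(enc(k, pt), []).append(k)
candidates = [(k1c, k2c) for k2c in range(N)
              for k1c in table.get(dec(k2c, ct), [])]
assert (ka, kb) in candidates
```

In practice a few extra plaintext/ciphertext pairs filter the surviving candidates down to the real key pair.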


Remote C2 seminar of 29 January 2021

The C2 seminar resumes remotely at the start of 2021. Connection details will be updated on this site in due course. The program is as follows:

  • 10h00 – Alain Couvreur, Inria Saclay and LIX, École Polytechnique: On the hardness of the code equivalence problem in the rank metric.
Abstract: In this talk, I will discuss the code equivalence problem in the rank metric. For F_{q^m}-linear codes, which are the most studied case for rank-metric codes, we prove that the problem can be solved in polynomial time, even in the worst case. The problem can also be stated for general matrix spaces; in that setting, I will present a reduction proving that the problem is at least as hard as code equivalence in the Hamming metric.
This is joint work with Thomas Debris-Alazard and Philippe Gaborit.
  • 11h00 – Olivier Bernard, Thales and Univ Rennes, CNRS, IRISA: Twisted-PHS: Using the Product Formula to Solve Approx-SVP in Ideal Lattices
Abstract: Approx-SVP is a well-known hard problem on lattices, which asks for short vectors in a given lattice, but its variant restricted to ideal lattices (which correspond to ideals of the ring of integers O_K of a number field K) is still not fully understood. For a long time, the best known algorithm for this problem on ideal lattices was the same as for arbitrary lattices. Recently, however, a series of works has suggested that the problem could be easier for ideal lattices than for arbitrary ones, in particular in the quantum setting. Our main contribution is a new "twisted" version of the PHS algorithm (by Pellet-Mary, Hanrot and Stehlé, 2019), which we call Twisted-PHS. As a minor contribution, we also propose several improvements to the PHS algorithm. On the theoretical side, we prove that our Twisted-PHS algorithm performs at least as well as the original PHS algorithm. On the practical side, we provide a full implementation of our algorithm, which suggests that much better approximation factors are achieved and that the given lattice bases are much more orthogonal than the ones used in PHS. To our knowledge, this is the first time this type of algorithm has been completely implemented and tested, for fields of degree up to 60.


Videoconference link:

Join the meeting from your computer, tablet or smartphone, or call in by phone.
France: +33 170 950 590
Access code: 526-194-573
Meeting ID: 526 194 573

C2 seminar of 15 November 2019 at Inria Paris

The next edition of the C2 seminar will be held on Friday 15 November 2019 in room Jacques-Louis Lions 2 at Inria Paris, 2 rue Simone Iff (Métro line 6, Dugommier stop).

Exceptionally, the seminar will take place in the morning only, since Xavier Bonnetain's PhD defence is held the same afternoon at 14h00: Xavier Bonnetain, Inria Paris, PhD defence: Hidden Structures and Quantum Cryptanalysis.

  • 10h00 – Geoffroy Couteau, IRIF, Université Paris-Diderot: Efficient Pseudorandom Correlation Generators: Silent OT Extension and More
Abstract: I will present recent developments in secure computation in the preprocessing model. In this model, a protocol is divided into two phases: a preprocessing phase, independent of the inputs, during which long correlated strings are generated and distributed among the parties; and a lightweight online phase, where the actual computation takes place. The generation of the preprocessing material is the main efficiency bottleneck of all modern secure computation protocols, due to the very large communication, computation and storage involved.
In this talk, I will discuss how a new cryptographic primitive, the pseudorandom correlation generator (PCG), can considerably improve this state of affairs. Informally, a PCG allows one to stretch short correlated keys into long correlated pseudorandom strings of near-arbitrary length. This allows the preprocessing phase of all secure computation protocols to be executed with a tiny amount of communication (to generate the short correlated keys) followed only by local operations (to stretch the correlated strings from these keys). I will discuss how to formalize this primitive and describe a construction of PCGs for an important class of correlations from the hardness of syndrome decoding. In turn, this yields considerable improvements for state-of-the-art secure computation, both asymptotically and in practice.
  • 11h15 – Pierre Karpman, Université Grenoble Alpes: High-order private multiplication in characteristic two revisited
Abstract: We revisit the high-order masking schemes for private multiplication introduced by Belaïd et al. at EUROCRYPT 2016, and the matrix model for non-interference (NI) security that they developed in their follow-up work at CRYPTO 2017. This leads to two main results.
1) We generalise the theorems of the CRYPTO 2017 work so that they apply to masking schemes over any finite field, in particular GF(2), and so that the strong non-interference (SNI) security notion can be analysed. This leads to an efficient algorithm that allows us to computationally check the (S)NI security of concrete multiplication gadgets over GF(2) up to order 11.
2) We propose new SNI and NI multiplication gadgets over GF(2) (and any extension thereof) up to order 9 and 11 that moderately improve the randomness complexity of the schemes of Barthe et al. (EUROCRYPT 2017).
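For context, the classic ISW multiplication gadget (Ishai-Sahai-Wagner, CRYPTO 2003) is the baseline whose randomness cost such schemes improve on. A sketch over GF(2), with correctness checked by decoding; security properties such as (S)NI are not checked here:

```python
import random
from functools import reduce
from operator import xor

def share(x, d, rng):
    # Encode bit x as d+1 shares whose XOR equals x.
    sh = [rng.randrange(2) for _ in range(d)]
    sh.append(reduce(xor, sh, x))
    return sh

def unshare(sh):
    return reduce(xor, sh, 0)

def isw_and(a, b, rng):
    # ISW multiplication over GF(2): fresh randomness z[i][j] for i < j,
    # z[j][i] = (z[i][j] ^ a_i b_j) ^ a_j b_i, and
    # c_i = a_i b_i ^ XOR_{j != i} z[i][j].
    n = len(a)
    z = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.randrange(2)
            z[i][j] = r
            z[j][i] = (r ^ (a[i] & b[j])) ^ (a[j] & b[i])
    c = []
    for i in range(n):
        ci = a[i] & b[i]
        for j in range(n):
            if j != i:
                ci ^= z[i][j]
        c.append(ci)
    return c

rng = random.Random(0)
for x in (0, 1):
    for y in (0, 1):
        for _ in range(20):
            c = isw_and(share(x, 5, rng), share(y, 5, rng), rng)
            assert unshare(c) == x & y
```

The gadget uses d(d+1)/2 fresh random bits at order d; reducing that count while preserving (S)NI is precisely the optimisation target of the talk.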


CCA seminar of Friday 21 June in room 105 of LIP6

The next edition of the C2 seminar will be held on Friday 21 June 2019 in room 105, corridor 25-26, of LIP6 at Sorbonne Université (Métro Jussieu).

  • 10h00 – Thomas Debris, INRIA Paris: Wave: a New Family of Trapdoor One-Way PSF Based on Codes.

(joint work with Nicolas Sendrier and Jean-Pierre Tillich)

We present here a new family of trapdoor one-way Preimage Sampleable Functions (PSF) based on codes, the Wave-PSF family. The trapdoor function is one-way under two computational assumptions: the hardness of generic decoding for high weights and the indistinguishability of generalized UV-codes. Our proof follows the GPV strategy [GPV08]. By including rejection sampling, we ensure the proper distribution for the trapdoor inverse output. The domain sampling property of our family is ensured by using and proving a variant of the leftover hash lemma. We instantiate the new Wave-PSF family with ternary generalized UV-codes to design a "hash-and-sign" signature scheme which achieves existential unforgeability under adaptive chosen-message attacks (EUF-CMA) in the random oracle model. For 128 bits of classical security, signature sizes are on the order of 13 thousand bits, the public key size on the order of 3 megabytes, and the rejection rate is limited to one rejection every 10 to 12 signatures.
  • 11h15 – Adrien Hauteville, INRIA Saclay: Durandal: a rank-metric-based signature scheme.

Joint work with Nicolas Aragon, Olivier Blazy, Philippe Gaborit and Gilles Zémor

Abstract : In recent years, interest in post-quantum cryptography has increased strongly, especially since the NIST call for the standardization of post-quantum cryptosystems. Code-based cryptography is a good candidate in this domain.
The Hamming metric has been well studied for many years; the rank metric is a more recent topic, but it offers many advantages in terms of key size and security hypotheses.
This presentation introduces rank-based cryptography and the new signature scheme Durandal, a variation of the Lyubashevsky approach in Euclidean lattices.
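Since Durandal adapts the Lyubashevsky approach, the rejection-sampling step at its heart can be illustrated with a hedged toy sketch (scalar integers instead of rank-metric vectors; the bounds B and M are invented for illustration):

```python
import random

def sign_response(s, c, B, M, rng=random):
    """Toy Lyubashevsky-style response: sample a mask y, compute z = y + c*s,
    and reject unless z lies in the safe range [-(B - M), B - M], where M
    bounds |c*s|.  Accepted z values are uniform on that range, independent
    of the secret s."""
    assert abs(c * s) <= M
    while True:
        y = rng.randint(-B, B)        # ephemeral mask
        z = y + c * s
        if abs(z) <= B - M:           # rejection sampling
            return z

z = sign_response(s=7, c=3, B=1000, M=25)
assert abs(z) <= 975
```

An accepted z is uniform on [-(B - M), B - M] regardless of s, which is exactly what makes the transcript simulatable without the secret key.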
  • 13h45 – Aurore Guillevic, LORIA :  A first step toward an implementation of the Tower Number Field Sieve: selecting polynomials

Joint work with Shashank Singh, IISER Bhopal, India

Abstract : In this work, we provide a first step toward a practical implementation of the Tower Number Field Sieve. This algorithm is expected to compute discrete logarithms more efficiently in finite fields GF(p^n), where p is a large prime number (hundreds of bits) and n is a small composite integer (n = 6, 8, 12 for example).
The NFS-like algorithms for discrete logarithm computation are made of four main steps: polynomial selection (two polynomials define two number fields), relation collection (collecting a large set of multiplicative relations of elements from these two number fields), linear algebra (computing the right kernel of a large sparse matrix) and finally, computing the discrete logarithm of a given target in GF(p^n) w.r.t. a generator.
We generalize to the new Tower setting the ranking formula used in the first step of NFS that selects the best parameters (polynomials) among a large set. For this we re-define the $\alpha$ function and write an exact implementation (Magma and SageMath). This function measures the bias in the smoothness probability of norms in number fields compared to random integers of the same size.
We use it to estimate the yield of polynomials, that is the expected number of relations, as a generalization of Murphy’s $E$ function, and finally the total amount of operations needed to compute a discrete logarithm with the Tower-NFS in the targeted fields.
As another application, this can be used to refine the earlier work of Barbulescu and Duquesne on estimating the running time of the algorithm. We apply our estimates to some pairing-friendly curves whose related pairing target field is GF(p^n), for some small composite n. For each curve, we generate curve parameters for p of increasing size every 32 bits, and p^n from 3000 bits to 12000 bits, in order to gain more intuition on how the Tower-NFS algorithm scales as the size of p increases.
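The $\alpha$ and Murphy-$E$ functions refine smoothness estimates for norms; as a hedged, much cruder stand-in (toy parameters, plain trial division), the underlying notion of B-smoothness and its empirical probability look like this:

```python
def is_smooth(n, bound):
    """Trial-divide n and report whether all its prime factors are <= bound."""
    for p in range(2, bound + 1):
        while n % p == 0:
            n //= p
        if n == 1:
            return True
    return n == 1

def smooth_proportion(lo, hi, bound):
    """Empirical probability that an integer in [lo, hi) is bound-smooth."""
    hits = sum(1 for n in range(lo, hi) if is_smooth(n, bound))
    return hits / (hi - lo)

assert is_smooth(2 * 3 * 5 * 5, 5)        # 150 = 2 * 3 * 5^2 is 5-smooth
assert not is_smooth(7, 5)                # 7 has a prime factor above 5
```

The point of the $\alpha$ function is that norms arising from a given polynomial pair are *not* random integers: they are systematically more (or less) smooth, and that bias is what the ranking formula measures.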
  • 15h00 : Leonardo Colò, Université d’Aix-Marseille : Orienting supersingular isogeny graphs
Abstract : Supersingular isogeny graphs have been used in the Charles–Goren–Lauter cryptographic hash function [CGL2006] and the supersingular isogeny Diffie–Hellman (SIDH) protocol of De Feo and Jao [JDF2011]. A recently proposed alternative to SIDH is the commutative supersingular isogeny Diffie–Hellman (CSIDH) protocol, in which the isogeny graph is first restricted to $\mathbb{F}_p$-rational curves $E$ and $\mathbb{F}_p$-rational isogenies, then oriented by the quadratic subring $\mathbb{Z}[\pi] \subset \mathrm{End}(E)$ generated by the Frobenius endomorphism $\pi: E \rightarrow E$.
We introduce a general notion of orienting supersingular elliptic curves and their isogenies, and use this as the basis to construct a general oriented supersingular isogeny Diffie-Hellman (OSIDH) protocol.
By imposing the data of an orientation by an imaginary quadratic ring $\mathcal{O}$, we obtain an augmented category of supersingular curves on which the class group $\mathrm{Cl}(\mathcal{O})$ acts faithfully and transitively. This idea is already implicit in the CSIDH protocol, in which supersingular curves over $\mathbb{F}_p$ are oriented by the Frobenius subring $\mathbb{Z}[\pi] \simeq \mathbb{Z}[\sqrt{-p}]$. In contrast, we consider an elliptic curve $E_0$ oriented by a CM order $\mathcal{O}_K$ of class number one. To obtain a nontrivial group action, we consider $\ell$-isogeny chains, on which the class group of an order $\mathcal{O}$ of large index $\ell^n$ in $\mathcal{O}_K$ acts, a structure we call a whirlpool. The map from an $\ell$-isogeny chain to its terminus forgets the structure of the orientation and the original base curve $E_0$, giving rise to a generic supersingular elliptic curve. Within this general framework we define a new oriented supersingular isogeny Diffie-Hellman (OSIDH) protocol, which has fewer restrictions on the proportion of supersingular curves covered and on the torsion group structure of the underlying curves. Moreover, the group action can be carried out effectively solely on the sequences of moduli points (such as $j$-invariants) on a modular curve, thereby avoiding expensive isogeny computations, and is further amenable to speedup by precomputations of endomorphisms on the base curve $E_0$.
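The structure that both CSIDH and OSIDH exploit can be stated compactly; this is a standard formulation of the class group action, not text from the talk:

```latex
% Free and transitive action of the class group on O-oriented curves:
\[
  \mathrm{Cl}(\mathcal{O}) \times \mathrm{Ell}(\mathcal{O})
    \longrightarrow \mathrm{Ell}(\mathcal{O}),
  \qquad ([\mathfrak{a}],\, E) \longmapsto [\mathfrak{a}] \cdot E,
  \qquad ([\mathfrak{a}][\mathfrak{b}]) \cdot E
    = [\mathfrak{a}] \cdot ([\mathfrak{b}] \cdot E).
\]
% A Diffie-Hellman exchange follows from commutativity of Cl(O):
\[
  [\mathfrak{a}] \cdot ([\mathfrak{b}] \cdot E_0)
    = [\mathfrak{b}] \cdot ([\mathfrak{a}] \cdot E_0).
\]
```

Here $\mathrm{Ell}(\mathcal{O})$ denotes the set of supersingular curves oriented by $\mathcal{O}$; faithfulness and transitivity of the action are the properties stated in the abstract.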

C2 seminar of 29 March 2019 in room Henri Lebesgue at the IRMAR in Rennes

Edition of Friday 29 March 2019 in room Henri Lebesgue, on the ground floor of the IRMAR in Rennes, buildings 22-23, Beaulieu campus.

  • 10h15 – Ilaria Chillotti, KU Leuven : Improved Bootstrapping for Approximate Homomorphic Encryption
Abstract : Since Cheon et al. introduced a homomorphic encryption scheme for approximate arithmetic (Asiacrypt ’17), it has been recognized as suitable for important real-life use cases of homomorphic encryption, including training of machine learning models over encrypted data. A follow-up work by Cheon et al. (Eurocrypt ’18) described an approximate bootstrapping procedure for the scheme.
In this work, we improve upon the previous bootstrapping result. We improve the amortized bootstrapping time per plaintext slot by two orders of magnitude, from ∼ 1 second to ∼ 0.01 second. To achieve this result, we adopt a smart level-collapsing technique for evaluating DFT-like linear transforms on a ciphertext. Also, we replace the Taylor approximation of the sine function with a more accurate and numerically stable Chebyshev approximation, and design a modified version of the Paterson-Stockmeyer algorithm for fast evaluation of Chebyshev polynomials over encrypted data.

(joint work with Hao Chen and Yongsoo Song)
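The gain from swapping Taylor for Chebyshev can be reproduced in a hedged standalone experiment (plain floating point on [-1, 1], nothing homomorphic; the degree and test grid are invented): a degree-7 Chebyshev interpolant of sin has a noticeably smaller worst-case error than the degree-7 Taylor polynomial:

```python
import math

def cheb_coeffs(f, n):
    """Chebyshev interpolation coefficients of f at the n Chebyshev nodes."""
    nodes = [math.cos(math.pi * (j + 0.5) / n) for j in range(n)]
    return [2.0 / n * sum(f(x) * math.cos(k * math.pi * (j + 0.5) / n)
                          for j, x in enumerate(nodes))
            for k in range(n)]

def cheb_eval(c, x):
    """Evaluate c[0]/2 + sum_k c[k]*T_k(x) via the Chebyshev recurrence."""
    t_prev, t_cur = 1.0, x
    s = c[0] / 2 + c[1] * x
    for k in range(2, len(c)):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
        s += c[k] * t_cur
    return s

def taylor_sin(x):
    """Degree-7 Taylor polynomial of sin around 0."""
    return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

c = cheb_coeffs(math.sin, 8)                  # degree-7 interpolant
grid = [i / 500 - 1 for i in range(1001)]     # test points in [-1, 1]
cheb_err = max(abs(math.sin(x) - cheb_eval(c, x)) for x in grid)
taylor_err = max(abs(math.sin(x) - taylor_sin(x)) for x in grid)
assert cheb_err < taylor_err                  # Chebyshev wins in the sup norm
```

Chebyshev interpolation spreads the error evenly over the interval instead of concentrating it at the endpoints, which is what makes it the better fit for the sine evaluation inside bootstrapping.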

  • 11h30 : Ayoub Otmani, LITIS, Université de Rouen : Permutation Code Equivalence is Not Harder Than Graph Isomorphism When Hulls are Trivial
Abstract : This talk deals with the problem of deciding whether two finite-dimensional linear subspaces over an arbitrary field are identical up to a permutation of the coordinates. This problem is referred to as permutation code equivalence.
I will explain that, given access to a subroutine that decides if two weighted undirected graphs are isomorphic, one may deterministically decide permutation code equivalence provided that the underlying vector spaces intersect trivially with their orthogonal complements with respect to an arbitrary inner product. Such vector spaces are usually called linear codes with trivial hulls. The reduction is efficient because it essentially boils down to inverting a square matrix whose order is the length of the codes involved. Experimental results obtained with randomly drawn binary codes with trivial hulls show that permutation code equivalence can be decided in a few minutes for lengths up to 50,000.
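The trivial-hull hypothesis is itself cheap to check; here is a hedged toy check over GF(2) (standalone, not the reduction from the talk), using the standard fact that for a full-rank generator matrix G the hull dimension equals k minus the GF(2)-rank of G·Gᵀ:

```python
def gf2_rank(rows):
    """Rank over GF(2) of rows given as integer bitmasks (Gaussian elimination)."""
    rows = list(rows)
    rank = 0
    while rows:
        row = rows.pop()
        if row:
            rank += 1
            pivot = row & -row                 # lowest set bit of the pivot row
            rows = [r ^ row if r & pivot else r for r in rows]
    return rank

def hull_dimension(G):
    """dim(C ∩ C⊥) = k - rank(G·Gᵀ) over GF(2), assuming G has full rank k.
    Rows of G are integer bitmasks; the inner product is the usual one."""
    k = len(G)
    ggt = []
    for gi in G:
        row = 0
        for j, gj in enumerate(G):
            if bin(gi & gj).count("1") & 1:    # <g_i, g_j> mod 2
                row |= 1 << j
        ggt.append(row)
    return k - gf2_rank(ggt)

assert hull_dimension([0b11]) == 1             # C = {00, 11} is self-orthogonal
assert hull_dimension([0b01, 0b10]) == 0       # the full space has trivial hull
```

A code to which the reduction applies is one for which `hull_dimension` returns 0.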
  • 13h30 : Ida Tucker, ENS Lyon : Practical fully secure unrestricted inner product functional encryption modulo prime p.
Abstract : Functional encryption (FE) is an advanced cryptographic primitive which allows fine control, for a single encrypted message, over how much information on the encrypted data each receiver can recover. To this end, many functional secret keys are derived from a master secret key. Each functional secret key allows, for a ciphertext encrypted under the associated public key, the recovery of a specific function of the underlying plaintext.
However, constructions for general FE are far from practical, or rely on non-standard and ill-understood cryptographic assumptions.
In this talk I will focus on the construction of efficient FE schemes for linear functions (i.e. the inner product functionality), and the framework in which our constructions hold. Such schemes yield many practical applications, and our constructions are the first FE schemes for inner products modulo a prime that are both efficient and recover the result whatever its size. I will also describe an instantiation of the framework using class groups of imaginary quadratic fields.
  • 14h45 : Brice Minaud, INRIA : Learning to Reconstruct: Statistical Learning Theory and Encrypted Database Attacks
Abstract : Searchable encryption enables a client to encrypt data and outsource its storage to an untrusted server, while retaining the ability to issue search queries over the outsourced data. For efficiency reasons, all practical constructions in this area allow the host server to learn a limited amount of information on the encrypted data as it processes queries, expressed in the security model by a leakage function. In this talk, I will give a short introduction to searchable encryption, and focus on the implications of very general (and seemingly innocuous) leakage functions. We will see that the problem of reconstructing the contents of the database from leakage information is closely related to statistical learning theory. Using this new viewpoint, I will present attacks that approximately reconstruct the contents of an entire database using only the access pattern leakage of range queries, a minimal type of leakage present in all practical constructions today.

Joint work with Paul Grubbs, Marie-Sarah Lacharité, and Kenny Paterson, to appear at S&P 2019.
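The access-pattern attack idea can be demonstrated in a hedged toy setting (tiny invented domain, every range queried exactly once): a record with value v appears in exactly v·(N - v + 1) of the ranges over {1, …, N}, so its appearance count pins down its value up to reflection:

```python
import math

def access_counts(values, N):
    """Issue every range query [a, b] on domain 1..N and count, for each
    record, how many query result sets contain it (the access pattern)."""
    counts = [0] * len(values)
    for a in range(1, N + 1):
        for b in range(a, N + 1):
            for i, v in enumerate(values):
                if a <= v <= b:
                    counts[i] += 1
    return counts

def value_from_count(c, N):
    """Invert c = v*(N - v + 1): return the candidate pair {v, N + 1 - v}."""
    disc = (N + 1) ** 2 - 4 * c
    root = math.isqrt(disc)
    v = (N + 1 - root) // 2
    return {v, N + 1 - v}

N = 10
values = [3, 7, 1, 10, 5]                      # secret record values
for v, c in zip(values, access_counts(values, N)):
    assert v in value_from_count(c, N)         # recovered up to reflection
```

The attacks in the talk are far more general (approximate reconstruction from a sample of queries, via statistical learning bounds), but the leakage being exploited is of this same minimal kind.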

C2 seminar on Friday 25 January 2019 in room 105 of the LIP6

The next edition of the C2 seminar will take place on Friday 25 January 2019 in room 105, corridor 25-26, of the LIP6 at Sorbonne Université (métro Jussieu).

  • 10h00 – Virginie Lallemand, LORIA : Multiplication Operated Encryption with Trojan Resilience
Abstract : In order to lower costs, the fabrication of Integrated Circuits is increasingly delegated to offshore contract foundries, making them exposed to malicious modifications known as hardware Trojans. Recent works have demonstrated that a strong form of Trojan resilience can be obtained from untrusted chips by exploiting secret sharing and Multi-Party Computation (MPC), yet with significant cost overheads. In this talk, we examine the possibility of building a symmetric cipher enabling similar guarantees in a more efficient manner. The proposed solution exploits a simple round structure mixing a modular multiplication and a multiplication with a binary matrix. Besides being motivated as a new block cipher design for Trojan resilience, our research also exposes the cryptographic properties of modular multiplication, which is of independent interest.

Based on joint work with Olivier Bronchain, Sebastian Faust, Gregor Leander, Léo Perrin and François-Xavier Standaert
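The round structure described above can be mimicked in a hedged toy (16-bit state, invented round keys, no relation to the actual cipher's parameters): an IDEA-style multiplication modulo the prime 2^16 + 1 composed with a GF(2)-linear rotation, both invertible, so decryption simply runs the rounds backwards:

```python
P = 2**16 + 1      # prime; multiplication modulo P is invertible

def mul(x, k):
    """IDEA-style multiplication modulo 2**16 + 1; the word 0 encodes 2**16."""
    return ((x or 2**16) * (k or 2**16) % P) % 2**16

def mul_inv(k):
    """Round-key inverse for mul (three-argument pow needs Python 3.8+)."""
    return pow(k or 2**16, -1, P) % 2**16

def rot(x, r):
    """GF(2)-linear layer: rotate a 16-bit word left by r bits."""
    return ((x << r) | (x >> (16 - r))) & 0xFFFF

def encrypt(x, keys):
    for k in keys:
        x = rot(mul(x, k), 5)                 # multiplication, then linear mixing
    return x

def decrypt(x, keys):
    for k in reversed(keys):
        x = mul(rot(x, 11), mul_inv(k))       # rot(., 11) undoes rot(., 5)
    return x

keys = [0x1234, 0xBEEF, 0x0007, 0xCAFE]
assert all(decrypt(encrypt(x, keys), keys) == x for x in (0, 1, 0x8000, 0xFFFF))
```

The interplay of the two layers is the point: the multiplication is highly non-linear over GF(2), while the binary-matrix layer is non-linear with respect to modular arithmetic.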

  • 11h15 : Antoine Grospellier, INRIA : Constant overhead quantum fault-tolerance with quantum expander codes
Abstract : We prove that quantum expander codes can be combined with quantum fault-tolerance techniques to achieve constant overhead: the ratio between the total number of physical qubits required for a quantum computation with faulty hardware and the number of logical qubits involved in the ideal computation is asymptotically constant, and can even be taken arbitrarily close to 1 in the limit of small physical error rate. This improves on the polylogarithmic overhead promised by the standard threshold theorem. To achieve this, we exploit a framework introduced by Gottesman together with a family of constant rate quantum codes, quantum expander codes. Our main technical contribution is to analyze an efficient decoding algorithm for these codes and prove that it remains robust in the presence of noisy syndrome measurements, a property which is crucial for fault-tolerant circuits. We also establish two additional features of the decoding algorithm that make it attractive for quantum computation: it can be parallelized to run in logarithmic depth, and is single-shot, meaning that it only requires a single round of noisy syndrome measurement.


  • 13h45 : Elise Barelli, Université de Versailles – Saint-Quentin, CRYPTO team, LMV laboratory : A study of compact keys for the McEliece scheme using algebraic geometry codes with non-trivial automorphisms.
Abstract : In 1978, McEliece introduced an encryption scheme based on linear codes and proposed using classical Goppa codes, i.e. subfield subcodes of algebraic geometry (AG) codes built on a curve of genus 0. This proposal remains secure, and with the aim of generalizing these codes, H. Janwa and O. Moreno proposed in 1996 to use subfield subcodes of codes built from curves of genus > 0, called SSAG codes (Subfield Subcodes of AG codes). This proposal offers a wider choice of codes, since one can vary the curve, the genus, and the rational points of the divisor that generates the code. The main obstacle to the use of these codes in cryptography remains the size of the public key compared to other public-key systems. To get around this limitation, we reduce the key size by using codes that admit a compact generator matrix. One way to obtain compact matrices is to choose codes with a non-trivial automorphism group, for example quasi-cyclic SSAG codes.
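The key-compression idea behind quasi-cyclic codes can be shown in isolation with a hedged sketch (binary circulant block, invented first row): only the first row of each circulant block needs to be published, and the full block is re-expanded by cyclic shifts, an n-fold storage saving per n × n block:

```python
def expand_circulant(first_row):
    """Rebuild an n x n circulant block from its first row: row i is the
    first row cyclically shifted right by i positions."""
    n = len(first_row)
    return [first_row[-i:] + first_row[:-i] for i in range(n)]

first = [1, 0, 1, 1, 0]        # stored key material: n bits instead of n*n
block = expand_circulant(first)
assert block[0] == first
assert block[1] == [0, 1, 0, 1, 1]
```

A quasi-cyclic generator matrix made of such blocks is thus described by its first rows alone, which is exactly what shrinks the McEliece public key.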


  • 15h00 : Julien Lavauzelle, IRMAR, Université de Rennes 1 : Constructions of PIR protocols
Abstract : A private information retrieval (PIR) protocol allows a client to retrieve the entry D_i of a database D without revealing any information about i to the entity holding D.
In this talk, we will give an overview of existing techniques for constructing such protocols. In particular, we will discuss the parameters that quantify their efficiency, such as the computational complexity of the servers, the communication complexity, and the storage overhead.
Finally, we will present two constructions of PIR protocols. The first, based on combinatorial objects called transversal designs, achieves optimal computational complexity for the servers holding the database. The second focuses on communication complexity, in the setting of files encoded with optimal regenerating codes.
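As a classical baseline for such protocols, here is a hedged sketch of the folklore two-server XOR scheme over a bit database, assuming honest-but-curious, non-colluding servers; it is not one of the two constructions of the talk:

```python
import secrets

def query(i, n):
    """Client: a uniformly random index set S for server 1, and S xor {i}
    for server 2.  Each set alone is uniform, so it reveals nothing about i."""
    S = {j for j in range(n) if secrets.randbits(1)}
    return S, S ^ {i}          # symmetric difference toggles index i

def answer(D, S):
    """Server: XOR of the requested database bits (it sees only the set S)."""
    a = 0
    for j in S:
        a ^= D[j]
    return a

def retrieve(D, i):
    S1, S2 = query(i, len(D))
    return answer(D, S1) ^ answer(D, S2)   # everything cancels except D[i]

D = [1, 0, 0, 1, 1, 0, 1, 0]
assert all(retrieve(D, i) == D[i] for i in range(len(D)))
```

Its drawback motivates the talk's parameters: each server reads about half the database per query, so the servers' computational complexity is linear, which is precisely what the transversal-design construction improves.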