C2 seminar of Friday 27 January in Lyon

The next edition of the seminar will take place on Friday 27 January at the ENS de Lyon, Descartes site, building D8 Buisson, room D01.
The entrance to building D8 Buisson is at 19 allée de Fontenay, 69007 Lyon, three minutes from the Debourg station on metro line B (direct line from the Part-Dieu railway station; the metro is outside the station, on the city-centre side, via the exit next to platform A). A reception desk sits behind the door, and the entrance is visible from room D01, which is directly opposite.

    • 10h45 : Johanna Loyer, INRIA Paris : Classical and quantum 3- and 4-sieves to solve SVP with low memory

Abstract: The Shortest Vector Problem (SVP) is at the foundation of lattice-based cryptography. The fastest known method to solve SVP in dimension d is lattice sieving, which runs in time 2^{td+o(d)} with 2^{md+o(d)} memory for absolute constants t, m. Finding reduced vectors in the sieve reduces to the configuration problem, i.e. searching for k vectors satisfying given constraints on their scalar products. In this work, we present a framework for k-sieve algorithms: we filter the input list of lattice vectors using a code structure modified from [BDGL16] to get lists centered around k codewords summing to the zero vector. Then, we solve a simpler instance of the configuration problem in the k filtered lists. Based on this framework, we describe classical sieves for k=3 and 4 that introduce new time-memory trade-offs. We also use the k-Lists algorithm [KPMP19] inside our framework, which improves the time for k=3 and gives new trade-offs for k=4.
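As a toy illustration of the reduction step that sieving iterates (a naive exhaustive scan, not the filtered framework of the talk; all vectors and sizes below are made up):

```python
from itertools import combinations

def norm2(v):
    """Squared Euclidean norm."""
    return sum(x * x for x in v)

def sieve_pass(vectors, k=2):
    """One naive k-sieve pass: keep every difference u - v1 - ... - v_{k-1}
    of k distinct list vectors that is nonzero and strictly shorter than the
    longest vector involved.  Real sieves avoid this O(N^k) scan; the talk's
    framework does so by first filtering the list around codewords."""
    out = []
    for combo in combinations(vectors, k):
        cand = combo[0]
        for v in combo[1:]:
            cand = tuple(a - b for a, b in zip(cand, v))
        if 0 < norm2(cand) < max(norm2(v) for v in combo):
            out.append(cand)
    return out

# Toy 2-sieve: (4, 0) and (3, 1) reduce to the shorter vector (1, -1).
shorter = sieve_pass([(4, 0), (3, 1), (0, 5)], k=2)
```

Iterating such passes over a large list of lattice vectors is what drives the 2^{td+o(d)} running time quoted in the abstract.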

    • 12h : Victor Dyseryn, XLIM, Université de Limoges – On the security of the PSSI problem and the Durandal signature scheme (joint work with Nicolas Aragon and Philippe Gaborit)

Abstract: The Product Spaces Subspaces Indistinguishability (PSSI) problem was presented at EUROCRYPT 2019 and is one of the pillars of the security of the Durandal signature scheme, an efficient rank-metric signature scheme introduced in the same 2019 paper. In this work we present a new attack against the PSSI problem. It combines signatures two by two to recover subspaces containing products of a secret scalar with a public scalar, and then uses a chain of intersections to recover the secret key. This new attack is efficient against a wide range of parameters. For the EUROCRYPT 2019 Durandal parameters, it would allow recovery of the secret key in about 2^36 attempts (plus the cost of linear algebra), using only a few dozen signatures. We then investigate how to mitigate this new attack and propose new secure parameters for the Durandal signature scheme.

    • 14h30 : Bastien Pacifico, I2M, Aix-Marseille Université. Chudnovsky-type algorithms over the projective line for multiplication in finite fields.

Abstract: There are several models for measuring the complexity of a multiplication algorithm in a finite extension of a finite field. The bilinear complexity of such an algorithm is the number of bilinear multiplications it performs, i.e. multiplications depending on both elements to be multiplied. The method of D.V. and G.V. Chudnovsky (1988) makes it possible to construct algorithms with good bilinear complexity. These are evaluation/interpolation algorithms using the rational points of algebraic curves, i.e. the rational places of a function field. The method has since been generalized in several ways, notably to evaluation at places of arbitrary degrees. In this talk, we propose a generic recursive construction of Chudnovsky-type algorithms over the field of rational functions, evaluating at places of increasing degrees. Our construction yields polynomial-interpolation algorithms that are constructible in polynomial time and have an attractive bilinear complexity, for any extension degree. We will also see that there is a deep link between these algorithms and algebraic-geometry error-correcting codes.

Preprint: https://arxiv.org/abs/2007.16082
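The evaluation/interpolation principle behind Chudnovsky-type algorithms can be sketched over a small prime field, counting as "bilinear" only the pointwise products of the two operands' evaluations. This is a generic Lagrange-interpolation toy with made-up parameters, not the recursive construction of the talk:

```python
P = 13  # small prime field GF(13); toy choice, not a parameter from the talk

def poly_eval(coeffs, x):
    """Evaluate a polynomial (coefficients listed low degree first) at x mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def mul_linear(poly, r):
    """Return poly * (X - r) mod P."""
    out = [0] * (len(poly) + 1)
    for k, c in enumerate(poly):
        out[k] = (out[k] - r * c) % P
        out[k + 1] = (out[k + 1] + c) % P
    return out

def interp_mul(a, b):
    """Multiply polynomials a and b by evaluation/interpolation: evaluate both
    at deg(ab)+1 points, multiply pointwise (these pointwise products are the
    only bilinear multiplications), then Lagrange-interpolate the product."""
    n = len(a) + len(b) - 1          # number of coefficients of the product
    xs = list(range(n))              # n distinct evaluation points in GF(P)
    ys = [poly_eval(a, x) * poly_eval(b, x) % P for x in xs]  # n bilinear mults
    coeffs = [0] * n
    for i, xi in enumerate(xs):      # Lagrange interpolation
        basis, denom = [1], 1
        for xj in xs:
            if xj != xi:
                basis = mul_linear(basis, xj)
                denom = denom * (xi - xj) % P
        scale = ys[i] * pow(denom, -1, P) % P
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % P
    return coeffs
```

Here the rational evaluation points play the role of the rational places of the function field; the talk's construction extends this idea to places of higher degrees when the field runs out of points.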

    • 15h45 : Benjamin Wesolowski, CNRS & ENS de Lyon – Hard problems for isogeny-based cryptography

Abstract: Isogeny-based cryptography is one of the few branches of public-key cryptography
that promises to resist quantum attacks. The security of these cryptosystems relates to the
(presumed) hardness of a variety of computational problems: finding paths in large “isogeny
graphs”, computing endomorphisms of elliptic curves, or inverting group actions. We present
these problems, analyse their hardness, and discuss their applications to post-quantum
cryptography.

C2 seminar of Friday 7 October 2022

The next edition of the seminar will take place on Friday 7 October in the Jacques-Louis Lions room at INRIA Paris.

  • 10:00. Thomas Decru, imec-COSIC, KU Leuven, Belgium.

    An efficient key recovery attack on SIDH.

    Abstract: SIDH is an isogeny-based public-key exchange protocol from 2011, conjectured to be resistant against attacks from both classical and quantum computers. SIKE, a key encapsulation mechanism based on SIDH, advanced to the 4th round of NIST’s post-quantum standardization process as an alternate candidate for becoming a new standard for public-key exchange.

    In this talk we will discuss how the information from the SIDH protocol can be turned into an isogeny between scarce types of abelian surfaces, a so-called “glue-and-split” isogeny in dimension two, which stems from a theorem by Kani from 1997. This allows us to break the decisional version of the SIDH protocol, which in turn breaks the computational version. The attack requires certain assumptions, but is very practical and can break all the proposed NIST security levels of SIKE within a day on a regular desktop computer with one core.

    Joint work with Wouter Castryck.

  • 11:10. Alexandre Wallet, INRIA Rennes.

    Mitaka – a simpler, parallelizable, maskable variant of Falcon.

    Abstract: Falcon is a signature scheme based on Euclidean lattices, recently selected by NIST as a future post-quantum standard. In bandwidth consumption, Falcon is currently the best alternative, and its performance is close to that of the other selected scheme, Dilithium. However, Falcon’s design choices impose conceptual limits that hinder its use in various scenarios: 1) the parameter sets are very restricted; 2) implementing the scheme is a delicate exercise; 3) protecting its current implementation is prohibitively expensive, besides being difficult.

    The scheme’s limiting factors stem mainly from its sampler of Gaussian vectors in Euclidean lattices. Our variant Mitaka, which follows Falcon’s general design, avoids the previous pitfalls by using a different sampler, called the “hybrid” sampler, which had been set aside because of a lower estimated security level. More generally, our work has shown that the security provided by the hybrid sampler can match that of Falcon, and that its conceptual simplicity allows a wider range of design and protection choices, at low cost. The talk will present the techniques and tools that allowed us to obtain these results for Mitaka, and will mention other recent general improvements for hash-then-sign over lattices.

    The talk will draw on several joint works with Thomas Espitau, Pierre-Alain Fouque, François Gérard, Melissa Rossi, Akira Takahashi, Mehdi Tibouchi and Yang Yu.
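A minimal sketch of the basic primitive both Falcon and Mitaka build on: sampling integers from a discrete Gaussian by rejection. This is illustrative only, with made-up parameters; the schemes' actual samplers over lattices, including the hybrid sampler discussed above, are far more involved and must be constant-time:

```python
import math
import random

def sample_discrete_gaussian(sigma, center=0.0, tail=10, rng=random):
    """Rejection-sample an integer from (approximately) a discrete Gaussian of
    parameter sigma around `center`: draw a uniform integer in a +/- tail*sigma
    window and accept it with probability exp(-(x - center)^2 / (2 sigma^2))."""
    lo = math.floor(center - tail * sigma)
    hi = math.ceil(center + tail * sigma)
    while True:
        x = rng.randint(lo, hi)
        if rng.random() < math.exp(-((x - center) ** 2) / (2 * sigma ** 2)):
            return x

random.seed(0)
samples = [sample_discrete_gaussian(4.0) for _ in range(2000)]
```

The variable acceptance rate of such rejection loops is precisely the kind of data-dependent behaviour that makes protecting lattice Gaussian samplers against side channels delicate.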

  • 13:45. Malika Izabachène, Cosmian, Paris, France. 

    Plug-and-play sanitization for TFHE.

    Abstract: Fully homomorphic encryption (FHE) makes it possible to evaluate an arbitrary function on encrypted messages while preserving the confidentiality of those messages. This property has many applications, notably when one wishes to store data in the cloud while retaining the ability to compute on it. In this talk, we focus on the confidentiality of the algorithm run by the cloud service.
    A sanitization procedure for an FHE ciphertext guarantees that all the information contained in the ciphertext, except the associated message, is destroyed. In particular, it is impossible to determine whether operations, and which ones, were performed to obtain the ciphertext, even with knowledge of the secret key.
    We will see how to build, from the bootstrapping procedures of FHEW (Eurocrypt 2015) and TFHE (Asiacrypt 2016), a procedure that provides these security guarantees.

    Joint work with Florian Bourse.

  • 14:45. Loïc Rouquette, INSA Lyon, LIRIS, CITI.

    Searching for Boomerang Distinguishers Using Constraint Programming

    Abstract: Differential cryptanalysis is one of the most effective attack techniques in symmetric cryptography; it exploits certain statistical properties of ciphers. More precisely, for a cipher E, one observes the probability of obtaining the output difference δout = EK(M) ⊕ EK(M ⊕ δin) when a difference δin is injected at the input. In 1999, David Wagner (FSE 99) devised a variant of the differential attack, more effective in some cases, which decomposes the encryption function into two sub-functions: EK = EK0 ∘ EK1.

    To ensure that the differences propagated through the two sub-functions EK0 and EK1 are compatible, Cid et al. (EuroCrypt 18) introduced the notion of the Boomerang Connectivity Table (BCT). Their technique was then extended and further tables were introduced: first the UBCT and LBCT tables by Wang and Peyrin (ToSC 19), then the EBCT table by Delaune et al. (ToSC 20).

    The growing complexity of the models arising from these techniques calls for efficient solving methods. Recently, declarative approaches (SAT, CP and (M)ILP) have been applied successfully to differential cryptanalysis problems.

    Using SAT and CP solvers, we apply the two-step solving approach, which starts with a truncated analysis of the cipher before moving to the binary version, to compute boomerang differential characteristics on Rijndael and Warp, building on the models of Delaune et al. (ToSC 20) and Gérault et al. (CP 16). First, we show how to adapt the model of Delaune et al. (ToSC 20) from Skinny to Rijndael, and also how to take differences in the key into account. Second, we show how to adapt that model to Feistel schemes, and we give our results on the ciphers Warp, Twine and LBlock-S (ToSC 22).

    Joint work with Virginie Lallemand, Marine Minier and Christine Solnon.
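The differential probability described in the abstract can be computed exhaustively for a small S-box via its difference distribution table (DDT), the object that the BCT and its variants refine. A sketch using the 4-bit PRESENT S-box as a convenient example (the talk concerns Rijndael and Feistel ciphers, not PRESENT):

```python
# PRESENT's 4-bit S-box, used here only as a convenient small example.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    """Difference distribution table: table[din][dout] counts the inputs x
    with sbox[x] ^ sbox[x ^ din] == dout, so table[din][dout] / len(sbox)
    is the differential probability of the transition din -> dout."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for din in range(n):
        for x in range(n):
            table[din][sbox[x] ^ sbox[x ^ din]] += 1
    return table

T = ddt(SBOX)
```

Declarative models like those of the talk chain such per-S-box transition counts across rounds; the solver then maximizes the product of the corresponding probabilities.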

C2 seminar of Friday 10 June in Rennes

Talks at the C2 seminar of Friday 10 June 2022

 

  •  Clara Pernot, INRIA Paris, New Representations of the AES Key Schedule

Abstract: In this talk we present a new representation of the AES key schedule, with some implications for the security of AES-based schemes. In particular, we show that the AES-128 key schedule can be split into four independent parallel computations operating on 32-bit chunks, up to a linear transformation. Surprisingly, this property has not been described in the literature after more than 20 years of analysis of AES. We show two consequences of our new representation, improving previous cryptanalysis results on AES-based schemes. First, we observe that iterating an odd number of key-schedule rounds results in a function with short cycles. This explains an observation of Khairallah on mixFeed, a second-round candidate in the NIST lightweight competition. Our analysis actually shows that his forgery attack on mixFeed succeeds with probability 0.44 (with data complexity 220 GB), breaking the scheme in practice. The same observation also leads to a novel attack on ALE, another AES-based AEAD scheme.

Our new representation also gives efficient ways to combine information from the first subkeys and information from the last subkeys, in order to reconstruct the corresponding master keys. In particular we improve previous impossible differential attacks against AES-128.

This is a joint work with Gaëtan Leurent.

  • Thibauld Feneuil, CryptoExperts and Sorbonne Université – Syndrome Decoding in the Head: Shorter Signatures from Zero-Knowledge Proofs

Abstract: In this talk, I will present a new zero-knowledge proof of knowledge for the syndrome decoding (SD) problem on random linear codes. Instead of using permutations like most existing protocols, we rely on the MPC-in-the-head paradigm, in which we reduce the task of proving the low Hamming weight of the SD solution to proving relations between specific polynomials. Specifically, we propose a 5-round zero-knowledge protocol that proves knowledge of a vector x such that y = Hx and wt(x) <= w, and which achieves a soundness error close to 1/N for an arbitrary N.

While turning this protocol into a signature scheme, we achieve a signature size of 11-12 KB for 128-bit security when relying on the hardness of the SD problem over binary fields. Using larger fields (like F_{256}), we can produce fast signatures of around 8 KB. This allows us to outperform Picnic3 and to be competitive with SPHINCS+, both post-quantum signature candidates in the ongoing NIST standardization effort. Since the security relies on the hardness of the syndrome decoding problem for random linear codes, which is known to be NP-hard and for which the cryptanalysis state of the art has been stable for many years, the result is a conservative signature scheme. Moreover, our scheme outperforms all existing code-based signature schemes for the common “signature size + public key size” metric.

Joint work with Antoine Joux and Matthieu Rivain.

  • Fangan Yssouf Dosso, Ecole des Mines de Saint-Etienne, Département SAS – PMNS for efficient arithmetic and small memory cost

Abstract: The Polynomial Modular Number System (PMNS) is an integer number system which aims to speed up arithmetic operations modulo a prime p. Such a system is defined by a tuple (p, n, g, r, E), where p, n, g and r are positive integers and E is a monic polynomial with integer coefficients having g as a root modulo p. Most of the work on PMNS focuses on polynomials E such that E(X) = X^n - l, where l is a nonzero integer, because such a polynomial provides efficiency and low memory cost, in particular for l = 2 or -2.

However, it turns out that these are not always the best choices. In this presentation, we start with the necessary background on PMNS. We then highlight a new set of polynomials E enabling very efficient operations in the PMNS with low memory requirements, along with new bounds and parameters, and we show that these polynomials are more interesting than (most) polynomials E(X) = X^n - l. Finally, we see how to use PMNS to randomise arithmetic operations, and hence high-level operations such as elliptic curve scalar multiplication, in order to protect implementations against advanced side-channel attacks such as differential power analysis (DPA).

Joint work with Jean-Marc Robert, Nadia El Mrabet, Laurent-Stéphane Didier and Pascal Véron.
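A minimal sketch of the PMNS idea, assuming toy parameters chosen so that E(X) = X^n - l automatically has g as a root modulo p (we simply set l = g^n mod p). Real PMNS constructions also reduce coefficient sizes after each product, which is omitted here:

```python
# Toy PMNS-style parameters (illustrative only): p prime, degree n, radix g.
# Setting l = g^n mod p guarantees that E(X) = X^n - l has g as a root mod p.
p, n, g = 1000003, 4, 33
l = pow(g, n, p)

def pmns_eval(a):
    """Map a representation (polynomial in g, low degree first) to Z/pZ."""
    return sum(c * pow(g, i, p) for i, c in enumerate(a)) % p

def pmns_mul(a, b):
    """Multiply two representations modulo E(X) = X^n - l: schoolbook product,
    then wrap each coefficient of degree >= n down by a factor l, which is
    valid modulo p because g^n = l (mod p)."""
    prod = [0] * (2 * n - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] += ai * bj
    for k in range(2 * n - 2, n - 1, -1):
        prod[k - n] += l * prod[k]
        prod[k] = 0
    return prod[:n]
```

The wrap-around step is where the shape of E matters: for E(X) = X^n - l with tiny l it is a few shifts and additions, and the talk's contribution is a new family of polynomials E making this step cheap more generally.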

  • Ivan Pogildiakov,  IRMAR, Université Rennes 1 –  Binary codes, hyperelliptic curves, and the Serre bound

Abstract: Algebraic curves over finite fields have found nice applications in areas such as coding theory and cryptography. In that context, an object of this sort is meaningful only if its number of rational points is as close as possible to the celebrated Serre bound. However, the question of the existence of an algebraic curve with a prescribed number of rational points is of theoretical interest in its own right. The literature contains different types of results in this direction, for example explicit or non-explicit constructions in the case when the definition field, the genus, or the number of rational points varies.

In this talk, we propose a new approach to constructing hyperelliptic curves over a fixed finite field of odd characteristic. The main tool is a specific binary linear code defined by the field. A nice feature of the approach is that it translates the existence and search problems for a hyperelliptic curve with desired parameters into those for a codeword of corresponding Hamming weight in a coset of the linear code. In some cases, using the machinery of the general theory of linear codes, we manage to find curves (with a fixed number of rational points) whose genera are reasonably close to the Serre bound.

For the sake of simplicity, the case of prime fields is mainly discussed. If time permits, as a by-product result, we will obtain new examples of curves with many points that improve upon some known lower bounds.
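The point counts involved can be made concrete for a toy hyperelliptic curve: over GF(p) with p an odd prime, the number of affine points on y^2 = f(x) is the character sum Σ_x (1 + χ(f(x))), where χ is the Legendre symbol. A sketch with made-up parameters, checked against brute force:

```python
def legendre(a, p):
    """Legendre symbol of a mod p (p an odd prime), in {-1, 0, 1}."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def affine_points_char_sum(f, p):
    """Affine points on y^2 = f(x) over GF(p): each x contributes
    1 + legendre(f(x)) solutions y."""
    return sum(1 + legendre(f(x), p) for x in range(p))

def affine_points_brute(f, p):
    """Same count by enumerating all (x, y) pairs."""
    return sum(1 for x in range(p) for y in range(p) if (y * y - f(x)) % p == 0)

# Toy genus-2 example: y^2 = x^5 + x + 1 over GF(31).
f = lambda x: x ** 5 + x + 1
```

It is exactly the sign pattern of χ(f(x)) over the field that the talk encodes as a word related to a binary linear code.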

C2 seminar of Friday 17 December in Rennes – postponed due to COVID

The next C2 seminar will take place in person on Friday 17 December 2021 at Université de Rennes 1, in the Métivier amphitheatre of IRISA, on the Beaulieu campus.

Video-conference link to come.

The programme is as follows:

  • 9h30, Aurore Guillevic, Inria Nancy and Aarhus University – Elliptic curves for zk-SNARKs

Joint work with Youssef El Housni, Consensys, Inria Saclay, École Polytechnique, and Diego Aranha, Aarhus University.

Abstract: SNARK stands for Succinct Non-interactive ARgument of Knowledge and is a powerful tool in blockchains and cryptocurrencies. The most efficient at the moment is Groth’16 preprocessing SNARK, implemented with a pairing-friendly elliptic curve with a subgroup of prime order q of 255 bits (the BLS12-381 curve). It provides a zero-knowledge proof of an arithmetic circuit that performs arithmetic operations modulo q.

For a recursive SNARK, one applies again the strategy of arithmetisation and proof: a compiler produces an arithmetic circuit that performs some cryptographic computation (in Groth’16: computing pairings and checking an equation). This second circuit deals with arithmetic operations not modulo q but modulo p, the field of definition of the first curve. A second pairing-friendly elliptic curve is required, whose order should be this prime p: this is a chain of elliptic curves.

To achieve a recursive SNARK, and recursively prove proofs (a.k.a. bootstrapping proofs), in 2014, Ben-Sasson, Chiesa, Tromer, and Virza introduced a cycle of pairing-friendly curves: an MNT curve of prime
order q defined over a field GF(p), and a second MNT curve of prime order p defined over the prime field GF(q). That is, the field of definition of each curve is the order of the other curve. However, generating such curves takes a large computational effort: [BCTV14] searched all curve discriminants up to 10^15, using 610,000 core-hours on a cluster, to obtain two cycles: one of 298 bits and a second of 753 bits. Moreover, the pairing takes its values in a quartic extension for the first curve (1192 and 3012 bits respectively), which no longer provides 128 bits of security because of the TNFS algorithm for computing discrete logarithms in extension fields (De Micheli et al. at Asiacrypt’21 present the first implementation). While a cycle of pairing-friendly curves is optimal for a recursive SNARK construction, it is very slow because of the many constraints on the curves. To date, no other cycle of pairing-friendly curves is known.

To circumvent the slowness of such cycles, one can instead use a chain of curves: the second (outer) curve has a subgroup of prime order being the field of definition of the first (inner) curve. Such a construction is much easier, and more pairing-friendly curves are possible. We generalise these constructions to families of SNARK-friendly 2-chains of pairing-friendly elliptic curves and compare their efficiency.

Finally, other cycles, half-cycles or chains of curves arise in variants of zero-knowledge proofs: cycles of non-pairing-friendly curves such as Pallas-Vesta, and half-pairing-friendly cycles such as Pluto-Eris. If time allows, an overview of these alternative curves will be given.

  • 10h45, Alexandre Wallet, Univ Rennes, CNRS, IRISA – Towards the deployment of lattice-based hash-and-sign schemes

Abstract: NIST’s post-quantum standardization competition is nearing its end. Among the three main finalists selected for digital signatures is the Falcon proposal. This scheme is an efficient instantiation of the GPV paradigm (Gentry-Peikert-Vaikuntanathan), which yields digital signature schemes based on the hardness of structured Euclidean lattice problems. In terms of speed, and especially of bandwidth, Falcon appears to outperform its direct competitors. However, the scheme suffers from conceptual limits that can hinder its use in various scenarios: 1) the parameter sets are very restricted; 2) implementing the scheme is a delicate exercise; 3) protecting the implementation, at least against timing attacks, seriously impacts the scheme’s performance (besides being, again, technically demanding).

In this talk, I will describe in more detail the scheme’s limiting building blocks, mainly related to the sampling of Gaussian vectors in Euclidean lattices. Indeed, in Falcon’s case, this algorithmic step determines both the theoretical and the concrete security of the scheme, but it is also a source of leakage from the viewpoint of side-channel attacks. I will then present Mitaka, an alternative proposal following Falcon’s design that avoids the previous pitfalls. Our scheme Mitaka is thus 1) conceptually simpler, and can be instantiated with large families of parameters; 2) noticeably faster for equivalent security and bandwidth consumption; 3) maskable at an arbitrary order at modest cost, using standard techniques. Finally, Mitaka can be implemented without resorting to floating-point arithmetic.

The talk will draw on several joint works with T. Espitau, P.-A. Fouque, F. Gérard, P. Kirchner, M. Rossi, A. Takahashi, M. Tibouchi and Y. Yu.

  • 14h00 : Clara Pernot, INRIA Paris, New Representations of the AES Key Schedule

Abstract: In this talk we present a new representation of the AES key schedule, with some implications for the security of AES-based schemes. In particular, we show that the AES-128 key schedule can be split into four independent parallel computations operating on 32-bit chunks, up to a linear transformation. Surprisingly, this property has not been described in the literature after more than 20 years of analysis of AES. We show two consequences of our new representation, improving previous cryptanalysis results on AES-based schemes. First, we observe that iterating an odd number of key-schedule rounds results in a function with short cycles. This explains an observation of Khairallah on mixFeed, a second-round candidate in the NIST lightweight competition. Our analysis actually shows that his forgery attack on mixFeed succeeds with probability 0.44 (with data complexity 220 GB), breaking the scheme in practice. The same observation also leads to a novel attack on ALE, another AES-based AEAD scheme.

Our new representation also gives efficient ways to combine information from the first subkeys and information from the last subkeys, in order to reconstruct the corresponding master keys. In particular we improve previous impossible differential attacks against AES-128. This is joint work with Gaëtan Leurent.

  • 15h15 : Victor Mollimard,  Univ Rennes, CNRS, IRISA – Efficient Methods to Search for Best Differential Characteristics on SKINNY

Abstract: Evaluating the resistance of ciphers against differential cryptanalysis is essential to define the number of rounds of new designs and to mount attacks derived from differential cryptanalysis. In this paper, we propose automatic tools to find the best differential characteristics on the SKINNY block cipher. As usually done in the literature, we split this search into two stages, denoted Step 1 and Step 2. In Step 1, we aim at finding all truncated differential characteristics with a low enough number of active S-boxes. Then, in Step 2, we try to instantiate each difference value while maximizing the overall differential characteristic probability. We solve Step 1 using an ad-hoc method inspired by the work of Fouque et al., whereas Step 2 is modelled with the Choco solver library, which seems to outperform all previous methods at this stage. Notably, for SKINNY-128 in the SK model and for 13 rounds, we retrieve the results of Abdelkhalek et al. within a few seconds (compared with 16 days) and we provide, for the first time, the best differential related-tweakey characteristics up to 14 rounds for the TK1 model. Regarding the TK2 and TK3 models, we were not able to test all the solutions of Step 1, and thus the differential characteristics we found up to 16 and 17 rounds are not necessarily optimal.

C2 seminar of Friday 15 October 2021

The next C2 seminar will take place in person on Friday 15 October 2021 at INRIA Paris, 2 rue Simone Iff, Paris 12.

The video-conference link is:
https://univ-rennes1-fr.zoom.us/j/7058629792

The programme is as follows:

  • 10h00, Katharina Boudgoust, IRISA, Université Rennes 1, Hardness of Module Learning With Errors With Small Secrets

Abstract: This talk focuses on the module variant of the Learning With Errors problem (Module-LWE), a fundamental computational problem in today’s lattice-based cryptography. To highlight its importance for practical schemes, note that several finalists of the NIST standardization project rely on the hardness of Module-LWE, e.g., the signature scheme Dilithium and the key encapsulation mechanism Kyber.

More concretely, in this talk we consider Module-LWE where the underlying secret has small norm. Whereas several works prove the hardness of unstructured LWE with short secrets, little was known for the module case. In this presentation I summarize our results, which previously appeared at Asiacrypt 2020 and CT-RSA 2021, where we investigate the hardness of Module-LWE when the secret has coefficients in the set {0,1}, also called binary Module-LWE. Furthermore, I show that both results naturally generalize to secrets with bounded coefficients. The talk concludes with some interesting open questions related to the hardness of Module-LWE with non-uniform secrets.

This is joint work with Corentin Jeudy, Adeline Roux-Langlois and Weiqiang Wen.
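A toy unstructured-LWE instance with a binary secret (the module structure is omitted for simplicity, and all parameters are made up) shows the flavour of the problem. With such a tiny dimension the secret can even be found by exhaustive search, which is precisely what hardness results for realistic dimensions rule out:

```python
import random
from itertools import product

random.seed(2)
q, n, m = 97, 8, 20  # made-up toy parameters

# LWE with a binary secret: b = A s + e mod q, s in {0,1}^n, small error e.
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
s = [random.randint(0, 1) for _ in range(n)]
e = [random.choice([-1, 0, 1]) for _ in range(m)]
b = [(sum(a * si for a, si in zip(row, s)) + ei) % q
     for row, ei in zip(A, e)]

def residual(cand):
    """Squared centered norm of b - A*cand mod q: small iff cand explains b."""
    r2 = 0
    for row, bi in zip(A, b):
        d = (bi - sum(a * c for a, c in zip(row, cand))) % q
        r2 += min(d, q - d) ** 2
    return r2

# Exhaustive search over all 2^n binary secrets, feasible only because n is
# tiny; binary (Module-)LWE hardness concerns much larger dimensions.
best = min(product([0, 1], repeat=n), key=residual)
```

The small search space of a binary secret is exactly why its hardness needs a dedicated proof, which is the subject of the talk.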

  • 11h15, Magali Bardet, LITIS, Université de Rouen Normandie, Algebraic cryptanalysis of problems in code-based cryptography and applications

Abstract: Rank-metric code-based cryptography relies on the hardness of decoding a random linear code in the rank metric. This fundamental problem is called the MinRank problem, and it is ubiquitous in rank-metric (or even Hamming-metric) code-based cryptography as well as in multivariate cryptography. For the structured instances arising in the former, security relies on a more specific problem, the Rank Syndrome Decoding problem. There is also a generalization called the Rank Support Learning problem, where the attacker has access to several syndromes corresponding to errors with the same support. These problems have various applications in code-based and multivariate cryptography (KEM and signature schemes), and a precise understanding of the complexity of solving them can help designers choose secure parameters.

In this talk, I will present the three problems and their relations to cryptographic schemes, their algebraic modeling and the recent improvements in the understanding of the complexity of solving those systems using algebraic techniques like Gröbner bases computations.

Joint works with Pierre Briaud, Maxime Bros, Daniel Cabarcas, Philippe Gaborit, Vincent Neiger, Ray Perlner, Olivier Ruatta, Daniel Smith-Tone, Jean-Pierre Tillich and Javier Verbel.
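The MinRank problem discussed above can be stated on a toy instance: given matrices over a finite field, find a nonzero linear combination of low rank. The exhaustive search below (over GF(2), with a planted low-rank matrix and made-up sizes) is only illustrative; the algebraic attacks of the talk replace it with Gröbner-basis techniques:

```python
from itertools import product
import random

def rank_gf2(mat):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    m = [row[:] for row in mat]
    rows, cols = len(m), len(m[0])
    rank = 0
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if m[r][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(rows):
            if r != rank and m[r][col]:
                m[r] = [a ^ b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def minrank_brute(mats, r):
    """Return a nonzero GF(2) combination of the matrices with rank <= r,
    by exhausting all 2^k - 1 combinations (toy sizes only)."""
    k = len(mats)
    rows, cols = len(mats[0]), len(mats[0][0])
    for coeffs in product([0, 1], repeat=k):
        if not any(coeffs):
            continue
        s = [[0] * cols for _ in range(rows)]
        for c, mat in zip(coeffs, mats):
            if c:
                s = [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(s, mat)]
        if rank_gf2(s) <= r:
            return coeffs, s
    return None

random.seed(3)
u = [1, 0, 1, 1, 0, 1]
v = [1, 1, 0, 1, 0, 0]
low_rank = [[a & b for b in v] for a in u]   # outer product u v^T: rank 1
mats = [low_rank] + [[[random.randint(0, 1) for _ in range(6)]
                      for _ in range(6)] for _ in range(3)]
sol = minrank_brute(mats, r=1)  # must succeed: taking low_rank alone qualifies
```

The rank condition rank(Σ c_i M_i) <= r is what the algebraic modelings of the talk turn into a polynomial system whose solving cost determines the security estimates.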

  • 13h45 : Antonin Leroux, Grace, LIX and INRIA Saclay–Île-de-France, Séta: Supersingular Encryption from Torsion Attacks

Abstract: We present Séta, a new family of public-key encryption schemes with post-quantum security based on isogenies of supersingular elliptic curves. It is constructed from a new family of trapdoor one-way functions, where the inversion algorithm uses Petit’s so-called torsion attacks on SIDH to compute an isogeny between supersingular elliptic curves, given an endomorphism of the starting curve and images of torsion points. We prove the OW-CPA security of Séta and present an IND-CCA variant using the post-quantum OAEP transformation. Several variants for key generation are explored, together with their impact on the selection of parameters such as the base prime of the scheme. We furthermore formalise an “uber” isogeny assumption framework which aims to generalize the computational isogeny problems encountered in schemes including SIDH, CSIDH, OSIDH and ours. Finally, we carefully select parameters to achieve a balance between security and running times, and present experimental results from our implementation.

  • 15h00 : Chloé Hébant, Cascade, ENS and INRIA Paris – Traceable Constant-Size Multi-Authority Credentials

Abstract: Many attribute-based anonymous credential (ABC) schemes have been proposed, allowing a user to prove possession of some attributes anonymously. They have become more and more practical, with the most recent papers achieving a constant-size credential to show a subset of attributes issued by a single credential issuer. However, proving possession of attributes coming from K different credential issuers usually requires K independent credentials to be shown. Only attribute-based credential schemes built from aggregatable signatures can overcome this issue.

In this paper, we propose new ABC schemes from aggregatable signatures with randomizable tags. We consider malicious credential issuers, with adaptive corruptions and collusions with malicious users. Whereas our constructions only support selective disclosures of attributes, to remain compact, our approach significantly improves the complexity in both time and memory of the showing of multiple attributes: for the first time, the cost for the prover is (almost) independent of the number of attributes and the number of credential issuers.

Whereas anonymous credentials require privacy of the user, we propose the first schemes allowing traceability.

We formally define an aggregatable signature scheme with (traceable) randomizable tags, which is of independent interest. We build concrete schemes from the recent linearly homomorphic signature scheme of PKC 20. As in all recent ABC schemes, our construction relies on signatures whose unforgeability is proven in the bilinear generic group model.

C2 seminar of Friday 4 June 2021

Programme of the seminar of Friday 4 June 2021

    • 10h00 – Jade Nardi, INRIA, Interactive proofs of proximity of a word to an algebraic-geometry code
Abstract: In recent years, enormous progress has been made in the field of verifiable computation, which makes it possible to provide, together with a result, a short and quickly verifiable proof that the computation was carried out correctly. Some of these proofs can also be zero-knowledge, and they are now widely deployed in blockchain-related applications.
Among the various families of verifiable computation schemes, one family of constructions fundamentally relies on proximity tests to algebraic codes. The constructions deployed in industry use an interactive protocol, called FRI (Fast Reed-Solomon Interactive Oracle Proof of Proximity), which verifies proximity to Reed-Solomon codes in time logarithmic in the length of the code.
Naturally, one may ask whether algebraic-geometry codes, which generalize Reed-Solomon codes, admit proximity tests with similar parameters. In this talk, we will present the FRI protocol in detail, stepping back to examine the mathematical tools behind its efficiency. We will describe how to generalize these tools to algebraic-geometry codes (which we will take the time to introduce briefly), which obstacles naturally arise from the geometry of non-rational curves, and how to overcome them by adapting the protocol. We will detail the case of codes over Kummer extensions.
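The folding step at the heart of FRI can be sketched concretely. Over a prime field, write f(X) = g(X^2) + X*h(X^2); a random verifier challenge beta reduces f to g + beta*h, of half the degree, and the folded evaluations on the squared domain follow from those of f. A minimal Python sketch with made-up toy parameters (field GF(257), evaluation domain of size 16):

```python
# Toy illustration of one FRI folding round over GF(257); the field,
# domain size, polynomial and challenge are made up for the example.
p = 257
omega = pow(3, 16, p)            # 3 is a primitive root mod 257, so omega has order 16
domain = [pow(omega, i, p) for i in range(16)]

def evaluate(coeffs, x):
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

coeffs = [5, 1, 2, 7]            # f = 5 + X + 2X^2 + 7X^3, degree < 4
evals = {x: evaluate(coeffs, x) for x in domain}

beta, inv2 = 11, pow(2, -1, p)   # verifier's random challenge
# Fold: f(X) = g(X^2) + X*h(X^2); on y = x^2 the folded value is
# g(y) + beta*h(y) = (f(x)+f(-x))/2 + beta*(f(x)-f(-x))/(2x).
folded = {}
for x in domain:
    y = x * x % p
    fx, fmx = evals[x], evals[(p - x) % p]
    folded[y] = ((fx + fmx) * inv2 + beta * (fx - fmx) * inv2 * pow(x, -1, p)) % p

# The folded evaluations agree with g + beta*h, a polynomial of half the degree.
g, h = coeffs[0::2], coeffs[1::2]
expected = [(gi + beta * hi) % p for gi, hi in zip(g, h)]
assert all(folded[y] == evaluate(expected, y) for y in folded)
```

The domain halves at each round (16 points fold to 8), which is what yields the logarithmic verification cost mentioned above.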

 

    • 11h00 – David Pointcheval, ENS, Verifiable Oblivious Polynomial Evaluation
Abstract: Oblivious Polynomial Evaluation allows Alice to obtain the evaluation, on a secret input of her choice, of a polynomial known only to Bob. Applications are numerous, such as Private Set Intersection. In that use case, it is moreover necessary to ensure that Bob uses the same polynomial for every evaluation.
In this talk, we show how to prove the correct evaluation of a secret polynomial, committed to by Bob, on an encrypted input from Alice, using Fully Homomorphic Encryption. The proof is compact and independent of the degree of the polynomial being evaluated.
This is joint work with Paola de Perthuis, Malika Izabachène and Anca Nitulescu.
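For background, the folklore (non-verifiable) way to realize oblivious polynomial evaluation from an additively homomorphic scheme has Alice send encryptions of the powers of her input and Bob combine them with his secret coefficients. The sketch below uses a toy Paillier instance with tiny, insecure primes and is only meant to illustrate the OPE functionality, not the FHE-based verifiable protocol of the talk:

```python
import math
import random

# Toy Paillier: n = p*q, g = n+1; Enc(m) = g^m * r^n mod n^2 is additively
# homomorphic. The primes are tiny, so this is insecure: illustration only.
random.seed(0)
p, q = 10007, 10009
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return L(pow(c, lam, n2)) * mu % n

# Alice: input x = 5, sends Enc(x) and Enc(x^2).
x, deg = 5, 2
powers = [enc(pow(x, i)) for i in range(1, deg + 1)]

# Bob: secret polynomial P(X) = 7 + 2X + 3X^2, evaluated homomorphically:
# Enc(a0) * prod_i Enc(x^i)^(a_i) = Enc(P(x)).
coeffs = [7, 2, 3]
c = enc(coeffs[0])
for a_i, c_xi in zip(coeffs[1:], powers):
    c = c * pow(c_xi, a_i, n2) % n2

assert dec(c) == 7 + 2 * x + 3 * x * x   # Alice decrypts P(5) without Bob learning x
```

Bob never sees x in the clear, and Alice learns only P(x); the protocol of the talk additionally proves that the committed polynomial was the one actually used.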

Séminaire C2 of Friday, March 19, 2021

Seminar program

  • 10h00 – Mélissa Rossi, ANSSI, Assessing residual security of lattice-based cryptography
Abstract: This talk will present a framework for the cryptanalysis of lattice-based schemes when side information about the secret, in the form of "hints", is available. This presentation outlines joint work with Dana Dachman-Soled, Léo Ducas and Huijing Gong presented at CRYPTO 2020 (https://eprint.iacr.org/2020/292).
This framework generalizes the primal lattice reduction attack and allows the progressive integration of hints before running a final lattice reduction step. The techniques for integrating hints include sparsifying the lattice, projecting onto and intersecting with hyperplanes, and altering the distribution of the secret vector. The main contribution is a toolbox and a methodology to integrate such hints into lattice reduction attacks and to predict the performance of those attacks with side information.
While initially designed for side-channel information, this framework can also be used in other settings. For example, one can simply exploit the constraints imposed by certain schemes (LAC, Round5, NTRU). I will also present a way to combine this framework with decryption-failure information, based on joint work with Jan-Pieter D'Anvers and Fernando Virdia presented at EUROCRYPT 2020 (https://eprint.iacr.org/2019/1399).
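To make one hint type concrete: in this framework the secret is modelled as a Gaussian with mean mu and covariance Sigma, and a "perfect hint" of the form <s, v> = l is absorbed by standard Gaussian conditioning. A small Python sketch (the 2-dimensional numbers are made up for illustration):

```python
from fractions import Fraction as F

def integrate_perfect_hint(mu, Sigma, v, l):
    """Condition a Gaussian (mu, Sigma) on the perfect hint <s, v> = l."""
    n = len(mu)
    Sv = [sum(Sigma[i][j] * v[j] for j in range(n)) for i in range(n)]
    vSv = sum(v[i] * Sv[i] for i in range(n))            # variance along v
    gap = l - sum(v[i] * mu[i] for i in range(n))
    mu2 = [mu[i] + gap * Sv[i] / vSv for i in range(n)]
    Sigma2 = [[Sigma[i][j] - Sv[i] * Sv[j] / vSv for j in range(n)]
              for i in range(n)]
    return mu2, Sigma2

mu = [F(0), F(0)]
Sigma = [[F(4), F(1)], [F(1), F(3)]]
v, l = [F(1), F(2)], F(5)
mu2, Sigma2 = integrate_perfect_hint(mu, Sigma, v, l)

# The hint now holds exactly on the mean, and all variance along v is gone.
assert sum(v[i] * mu2[i] for i in range(2)) == l
assert all(sum(Sigma2[i][j] * v[j] for j in range(2)) == 0 for i in range(2))
```

Each integrated hint removes one dimension of uncertainty, which is what makes the final lattice reduction step cheaper.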

 

  • 11h00 – André Schrottenloher, CWI Amsterdam, Quantum tools for symmetric cryptanalysis
Abstract: The security of (symmetric) cryptosystems is based on cryptanalysis: our confidence stems from extensive and continuous public scrutiny. Meanwhile, some consequences of large-scale quantum computing devices for cryptography are well known, for example the quadratic speedup of Grover's algorithm against exhaustive key search. But the post-quantum security of symmetric cryptography cannot be precisely assessed unless we try to design more specific attacks, as we do in the classical world.
In this talk, I will give two examples of symmetric designs that can be attacked with dedicated quantum algorithms. I will first focus on the Even-Mansour construction, which turns a public permutation into a cipher. Even-Mansour has a strong algebraic structure and is, in fact, completely broken if the adversary is allowed "superposition queries". Yet this structure seems difficult to exploit for an adversary making standard, classical queries, and this has been an open problem for a while. We will see how to significantly improve the quantum key recovery on Even-Mansour using the "offline Simon's algorithm" (joint work with X. Bonnetain, A. Hosoyamada, M. Naya-Plasencia and Y. Sasaki).
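The algebraic structure in question is easy to exhibit: for E(x) = P(x XOR k1) XOR k2, the function f(x) = E(x) XOR P(x) satisfies f(x) = f(x XOR k1), a hidden period equal to the first key, which Simon's algorithm recovers with superposition queries. A toy classical check over a made-up 8-bit permutation:

```python
import random

# Toy Even-Mansour cipher E(x) = P(x ^ k1) ^ k2 over 8-bit blocks, with a
# made-up random public permutation P. The "offline Simon" variant of the
# talk recovers k1 from the hidden period of f using classical queries to E
# plus quantum computation on the public P.
random.seed(1)
n = 8
N = 1 << n
P_table = list(range(N))
random.shuffle(P_table)

def P(x):
    return P_table[x]

k1, k2 = random.randrange(N), random.randrange(N)

def E(x):
    return P(x ^ k1) ^ k2

def f(x):
    return E(x) ^ P(x)

# Classical check of the hidden-period property over the whole domain:
# f(x) = P(x ^ k1) ^ k2 ^ P(x) is invariant under x -> x ^ k1.
assert all(f(x) == f(x ^ k1) for x in range(N))
print("f has hidden period k1 =", hex(k1))
```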
Our second target will be multiple encryption: a cipher obtained by composing independent ciphers in sequence, with independent keys. I will briefly introduce quantum merging algorithms (joint work with M. Naya-Plasencia) and their application to exhaustive key search for multiple encryption. These algorithms are quite generic and have many other applications in cryptography.
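For comparison, the classical baseline on double encryption is the meet-in-the-middle attack, which trades memory for time: with two n-bit keys it costs about 2*2^n cipher evaluations plus 2^n memory, instead of 2^(2n) for brute force. A toy sketch over a made-up 8-bit block cipher:

```python
import random

# Made-up toy block cipher E(k, x) = perm[x ^ k]: a fixed public
# permutation with the key XORed in. Double encryption composes two
# instances with independent keys ka, kb.
random.seed(2)
n = 8
N = 1 << n
perm = list(range(N))
random.shuffle(perm)
inv = [0] * N
for i, v in enumerate(perm):
    inv[v] = i

def E(k, x):
    return perm[x ^ k]

def D(k, y):
    return inv[y] ^ k

ka, kb = random.randrange(N), random.randrange(N)

# Two known plaintext/ciphertext pairs (the second filters false positives).
pairs = [(m, E(kb, E(ka, m))) for m in (0x3A, 0xC5)]
m0, c0 = pairs[0]

forward = {E(k, m0): k for k in range(N)}            # encrypt from the plaintext side
candidates = [(forward[D(k2, c0)], k2)               # decrypt from the ciphertext side
              for k2 in range(N) if D(k2, c0) in forward]
solutions = [kk for kk in candidates
             if all(E(kk[1], E(kk[0], m)) == c for m, c in pairs)]
assert (ka, kb) in solutions
print("recovered key pair(s):", solutions)
```

The quantum merging algorithms of the talk generalize this list-matching idea to more than two encryption layers.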
 

 

Online Séminaire C2 of January 29, 2021

The Séminaire C2 resumes online at the start of 2021. Connection details will be updated on this site in due course. The program is as follows:

  • 10h00 – Alain Couvreur, Inria Saclay and LIX, École Polytechnique: On the hardness of the code equivalence problem in the rank metric.
Abstract: In this talk, I will discuss the code equivalence problem in the rank metric. For F_{q^m}-linear codes, the most studied setting for rank-metric codes, we prove that the problem can be solved in polynomial time, even in the worst case. The problem can also be stated for general matrix spaces. In that case, I will present a reduction proving that the problem is at least as hard as code equivalence in the Hamming metric.
This is joint work with Thomas Debris-Alazard and Philippe Gaborit.
  • 11h00 – Olivier Bernard, Thales and Univ Rennes, CNRS, IRISA: Twisted-PHS: Using the Product Formula to Solve Approx-SVP in Ideal Lattices
Abstract: Approx-SVP is a well-known hard problem on lattices, which asks for short vectors in a given lattice, but its variant restricted to ideal lattices (which correspond to ideals of the ring of integers O_K of a number field K) is still not fully understood. For a long time, the best known algorithm for this problem on ideal lattices was the same as for arbitrary lattices. Recently, however, a series of works has suggested that the problem could be easier for ideal lattices than for arbitrary ones, in particular in the quantum setting. Our main contribution is a new "twisted" version of the PHS algorithm (by Pellet-Mary, Hanrot and Stehlé, 2019), which we call Twisted-PHS. As a minor contribution, we also propose several improvements to the PHS algorithm. On the theoretical side, we prove that our Twisted-PHS algorithm performs at least as well as the original PHS algorithm. On the practical side, we provide a full implementation of our algorithm, which suggests that much better approximation factors are achieved and that the produced lattice bases are much more orthogonal than those used in PHS. To our knowledge, this is the first time this type of algorithm has been completely implemented and tested, for fields of degree up to 60.

 

Video conference link:

Join the meeting from your computer, tablet or smartphone.
https://global.gotomeeting.com/join/526194573

You can also dial in by phone.
France: +33 170 950 590

Access code: 526-194-573

Join the meeting from a room or video conferencing system.
Dial or type: 67.217.95.2 or inroomlink.goto.com
Meeting ID: 526 194 573
Or dial directly: 526194573@67.217.95.2 or 67.217.95.2##526194573

Séminaire C2 of November 15, 2019, at Inria Paris

The next edition of the Séminaire C2 will take place on Friday, November 15, 2019, in room Jacques-Louis Lions 2 at Inria Paris, 2 rue Simone Iff (Métro line 6, Dugommier station).

Exceptionally, the seminar will only take place in the morning, as Xavier Bonnetain's PhD defense takes place at 14h00 the same afternoon: Xavier Bonnetain, Inria Paris, PhD defense: Hidden Structures and Quantum Cryptanalysis.

  • 10h00 – Geoffroy Couteau, IRIF, Université Paris-Diderot: Efficient Pseudorandom Correlation Generators: Silent OT Extension and More
Abstract: I will present recent developments in secure computation in the preprocessing model. In this model, a protocol is divided into two phases: a preprocessing phase, independent of the inputs, during which long correlated strings are generated and distributed among the parties; and a lightweight online phase, where the actual computation takes place. The generation of the preprocessing material is the main efficiency bottleneck of all modern secure computation protocols, due to the very large communication, computation, and storage involved.
In this talk, I will discuss how a new cryptographic primitive, a pseudorandom correlation generator (PCG), can be used to considerably improve this state of affairs. Informally, a PCG allows one to stretch short, correlated keys into long correlated pseudorandom strings of near-arbitrary length. This allows the preprocessing phase of all secure computation protocols to be executed with a tiny amount of communication (to generate the short correlated keys) followed only by local operations (to stretch correlated strings from these keys). I will discuss how to formalize this primitive and describe a construction of PCGs for an important class of correlations from the hardness of syndrome decoding. In turn, this yields considerable improvements for state-of-the-art secure computation, both asymptotically and in practice.
  • 11h15 – Pierre Karpman, Université Grenoble Alpes: High-order private multiplication in characteristic two revisited
Abstract : We revisit the high-order masking schemes for private multiplication introduced by Belaïd et al. at EUROCRYPT 2016, and the matrix model for non-interference (NI) security that they develop in their follow-up work of CRYPTO 2017. This leads to two main results.
1) We generalise the theorems of CRYPTO 2017 so as to be able to apply them to masking schemes over any finite field — in particular GF(2) — and to be able to analyse the strong non-interference (SNI) security notion. This leads to an efficient algorithm that allows us to computationally check the (S)NI security of concrete multiplication gadgets over GF(2) up to order 11.
2) We propose new SNI and NI multiplication gadgets over GF(2) (and any extension thereof) up to order 9 and 11 that moderately improve the randomness complexity of the schemes of Barthe et al. (EUROCRYPT 2017).
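For reference, the classical ISW multiplication gadget over GF(2) is the baseline whose randomness cost (d(d+1)/2 fresh bits at masking order d) the new gadgets improve on. A sketch of the standard construction (not the talk's new gadgets):

```python
import random
from functools import reduce

# ISW multiplication over GF(2): from shares of bits a and b, compute
# shares of (a AND b), using n(n-1)/2 fresh random bits for n = d+1 shares.
random.seed(0)

def xor_all(bits):
    return reduce(lambda x, y: x ^ y, bits, 0)

def share(bit, order):
    """Split a secret bit into order+1 Boolean shares."""
    rnd = [random.randrange(2) for _ in range(order)]
    return rnd + [bit ^ xor_all(rnd)]

def isw_mult(a, b):
    n = len(a)
    r = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r[i][j] = random.randrange(2)                       # fresh random bit
            r[j][i] = r[i][j] ^ (a[i] & b[j]) ^ (a[j] & b[i])   # correction term
    return [(a[i] & b[i]) ^ xor_all(r[i][j] for j in range(n) if j != i)
            for i in range(n)]

# Exhaustive correctness check at order 3 (4 shares): the shares of the
# product XOR back to a AND b for every input pair.
for x in (0, 1):
    for y in (0, 1):
        assert xor_all(isw_mult(share(x, 3), share(y, 3))) == x & y
```

Checking that such a gadget is (S)NI, as in result 1) above, is exactly the combinatorial task the matrix model automates.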

 

Séminaire CCA of Friday, June 21, in room 105 of LIP6

The next edition of the Séminaire C2 will take place on Friday, June 21, 2019, in room 105, corridor 25-26, of LIP6 at Sorbonne Université (Métro Jussieu).

  • 10h00 – Thomas Debris, INRIA Paris: Wave: a New Family of Trapdoor One-Way PSF Based on Codes.

(joint work with Nicolas Sendrier and Jean-Pierre Tillich)

We present a new family of trapdoor one-way Preimage Sampleable Functions (PSF) based on codes, the Wave-PSF family. The trapdoor function is one-way under two computational assumptions: the hardness of generic decoding for high weights and the indistinguishability of generalized (U, U+V)-codes.
Our proof follows the GPV strategy [GPV08]. By including rejection sampling, we ensure the proper distribution of the trapdoor inverse output. The domain sampling property of our family is ensured by using and proving a variant of the leftover hash lemma. We instantiate the new Wave-PSF family with ternary generalized (U, U+V)-codes to design a "hash-and-sign" signature scheme which achieves existential unforgeability under adaptive chosen message attacks (EUF-CMA) in the random oracle model. For 128 bits of classical security, signature sizes are on the order of 13 thousand bits, the public key size is on the order of 3 megabytes, and the rejection rate is limited to one rejection every 10 to 12 signatures.
  • 11h15 – Adrien Hauteville, INRIA Saclay: Durandal: a rank metric based signature scheme.

Joint work with Nicolas Aragon, Olivier Blazy, Philippe Gaborit and Gilles Zémor

Abstract: In recent years, interest in post-quantum cryptography has increased strongly, especially since the NIST call for the standardization of post-quantum cryptosystems. Code-based cryptography is a good candidate in this domain.
The Hamming metric has been well studied for many years; the rank metric is a newer topic, but it offers many advantages in terms of key size and security assumptions.
This presentation introduces rank-based cryptography and the new signature scheme Durandal, which is a variation of Lyubashevsky's approach for Euclidean lattices.
  • 13h45 – Aurore Guillevic, LORIA: A first step toward an implementation of the Tower Number Field Sieve: selecting polynomials

Joint work with Shashank Singh, IISER Bhopal, India

Abstract: In this work, we provide a first step toward a practical implementation of the Tower Number Field Sieve. This algorithm is expected to compute discrete logarithms more efficiently in finite fields GF(p^n), where p is a large prime (hundreds of bits) and n is a small composite integer (e.g. n = 6, 8, 12).
NFS-like algorithms for discrete logarithm computation consist of four main steps: polynomial selection (two polynomials define two number fields), relation collection (collecting a large set of multiplicative relations between elements of these two number fields), linear algebra (computing the right kernel of a large sparse matrix) and, finally, computing the discrete logarithm of a given target in GF(p^n) with respect to a generator.
We generalize to the new tower setting the ranking formula used in the first step of NFS to select the best parameters (polynomials) among a large set. For this we redefine the $\alpha$ function and provide an exact implementation (in Magma and SageMath). This function measures the bias in the smoothness probability of norms in number fields compared to random integers of the same size.
We use it to estimate the yield of polynomials, that is, the expected number of relations, as a generalization of Murphy's $E$ function, and finally the total number of operations needed to compute a discrete logarithm with the Tower-NFS in the targeted fields.
As another application, this can be used to refine the earlier work of Barbulescu and Duquesne on estimating the running time of the algorithm. We apply our estimates to some pairing-friendly curves whose associated pairing target field is GF(p^n) for some small composite n. For each curve, we generate curve parameters for p of increasing size every 32 bits, with p^n ranging from 3000 bits to 12000 bits, to gain intuition on how the Tower-NFS algorithm scales with the size of p.
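The idea behind the $\alpha$ function can be illustrated with a naive experiment (made-up toy parameters, nothing like the real TNFS norm sizes): compare the B-smoothness rate of the values of a polynomial against random integers of comparable size:

```python
import random

def primes_up_to(bound):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (bound + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(bound ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, ok in enumerate(sieve) if ok]

def is_smooth(m, primes):
    """True if m factors entirely over the given prime list."""
    for p in primes:
        while m % p == 0:
            m //= p
    return m == 1

# Toy comparison: smoothness rate of f(a) = a^2 + 1 versus random integers
# of comparable size, for a tiny smoothness bound B.
random.seed(0)
B = 100
primes = primes_up_to(B)
poly_vals = [a * a + 1 for a in range(100, 200)]
rand_vals = [random.randrange(10**4, 4 * 10**4) for _ in range(100)]
rate = lambda vals: sum(is_smooth(v, primes) for v in vals) / len(vals)
print(f"B-smooth rate, f(a): {rate(poly_vals):.2f}, random: {rate(rand_vals):.2f}")
```

The $\alpha$ function quantifies exactly this kind of bias analytically (via the splitting behavior of small primes in the number fields) rather than by sampling.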
    • 15h00 – Leonardo Colo, Université d’Aix-Marseille: Orienting supersingular isogeny graphs
Abstract: Supersingular isogeny graphs have been used in the Charles-Goren-Lauter cryptographic hash function [CGL 2006] and the supersingular isogeny Diffie-Hellman (SIDH) protocol of De Feo and Jao [JDF 2011]. A recently proposed alternative to SIDH is the commutative supersingular isogeny Diffie-Hellman (CSIDH) protocol, in which the isogeny graph is first restricted to F_p-rational curves E and F_p-rational isogenies, then oriented by the quadratic subring Z[π] ⊂ End(E) generated by the Frobenius endomorphism π: E → E.
We introduce a general notion of orienting supersingular elliptic curves and their isogenies, and use this as the basis to construct a general oriented supersingular isogeny Diffie-Hellman (OSIDH) protocol.
By imposing the data of an orientation by an imaginary quadratic ring O, we obtain an augmented category of supersingular curves on which the class group Cl(O) acts faithfully and transitively. This idea is already implicit in the CSIDH protocol, in which supersingular curves over F_p are oriented by the Frobenius subring Z[π] ≅ Z[√-p]. In contrast, we consider an elliptic curve E_0 oriented by a CM order O_K of class number one. To obtain a nontrivial group action, we consider ℓ-isogeny chains, on which the class group of an order O of large index ℓ^n in O_K acts, a structure we call a whirlpool. The map from an ℓ-isogeny chain to its terminus forgets the structure of the orientation and the original base curve E_0, giving rise to a generic supersingular elliptic curve. Within this general framework we define a new oriented supersingular isogeny Diffie-Hellman (OSIDH) protocol, which has fewer restrictions on the proportion of supersingular curves covered and on the torsion group structure of the underlying curves. Moreover, the group action can be carried out effectively solely on the sequences of moduli points (such as j-invariants) on a modular curve, thereby avoiding expensive isogeny computations, and is further amenable to speedups through precomputations of endomorphisms on the base curve E_0.