Séminaire C2, October 4, 2024

10:00.  Henry Bambury (ENS Paris, Inria and DGA). Provably Reducing Near-Hypercubic Lattices.

Abstract: A recurring question in estimating parameters for lattice-based cryptography is the following: what is the largest dimension in which an attacker can be allowed to solve SVP in order for the scheme to remain secure? This question comes in two flavours: provable or heuristic. While we expect heuristic algorithms to perform better in practice, Ducas (DCC 2023) proposed a provable algorithm for reducing the hypercubic lattice (Z^n) that only requires SVP oracles in dimension n/2, bridging the gap between what is provable and what is heuristic. In this talk, we show that Ducas' algorithm can be relaxed to sqrt(2)-approximate-SVP oracles, and provide an algorithm that reduces most n-dimensional NTRU lattices using SVP oracles in dimension n/2. This answers a conjecture of Gama, Howgrave-Graham and Nguyen from Eurocrypt 2006. If time allows, we compare the (provable) results with (heuristic) asymptotics for the primal attack.

Joint work with Phong Q. Nguyen.
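For readers less familiar with the approximation variant mentioned above, γ-approximate SVP is, in standard notation:

```latex
\[
  \gamma\text{-}\mathrm{SVP}:\quad \text{given a basis of a lattice } L \subset \mathbb{R}^n,
  \ \text{find } v \in L \setminus \{0\} \ \text{with } \|v\| \le \gamma \cdot \lambda_1(L),
\]
```

where λ1(L) denotes the length of a shortest nonzero vector of L; exact SVP is the case γ = 1, and the talk relaxes Ducas' exact-SVP oracles to γ = sqrt(2).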

11:00. Julien Devevey (ANSSI). G+G: A Fiat-Shamir Lattice Signature based on Convolved Gaussians.

Abstract: Adapting Schnorr's signature to the lattice setting has been a long-standing problem. It was solved by Lyubashevsky in 2009 by introducing rejection sampling and the Fiat-Shamir with Aborts framework, which adds a layer of complexity to the scheme, as underlined by the fact that no correct proof of the framework was given before [Devevey et al., Crypto 2023; Barbosa et al., Crypto 2023]. Getting rid of rejection sampling is possible, but it comes at the cost of a significant increase in signature size due to the use of flooding techniques (see, e.g., [Damgård et al., Crypto 2012]). In this talk we propose a new adaptation, which relies on Gaussian convolution rather than the flooding or rejection sampling of previous approaches. It does not involve any abort, can be proved secure in the ROM and QROM using existing analyses of the Fiat-Shamir transform, and enjoys smaller signature sizes (both asymptotically and for concrete security levels).

Joint work with Damien Stehlé and Alain Passelègue.
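As a toy illustration of the convolution fact behind the scheme's title (the sum of two independent Gaussians with standard deviations s1 and s2 is Gaussian with variance s1² + s2²), here is a continuous, non-cryptographic sketch of ours; the scheme itself works with discrete Gaussians over lattices, which this does not show:

```python
import random
import statistics

# Empirically check the variance-addition law for convolved Gaussians:
# X ~ N(0, s1^2), Y ~ N(0, s2^2) independent  =>  Var(X + Y) = s1^2 + s2^2.
def convolved_samples(s1, s2, n, seed=0):
    rng = random.Random(seed)
    return [rng.gauss(0.0, s1) + rng.gauss(0.0, s2) for _ in range(n)]

s1, s2 = 3.0, 4.0
samples = convolved_samples(s1, s2, 200_000)
expected = s1 * s1 + s2 * s2   # 25.0
print(abs(statistics.pvariance(samples) - expected) / expected < 0.02)
```

With 200 000 samples the empirical variance lands well within 2% of 25, so the check prints True.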

13:45. Geoffroy Couteau (CNRS, IRIF, Université Paris Cité). Fast Public-Key Silent OT and More from Constrained Naor-Reingold.

Abstract: Pseudorandom Correlation Functions (PCFs) allow two parties, given correlated evaluation keys, to locally generate arbitrarily many pseudorandom correlated strings, e.g. Oblivious Transfer (OT) correlations, which can then be used by the two parties to jointly run secure computation protocols.

In this work, we provide a novel and simple approach for constructing PCFs for OT correlation, by relying on constrained pseudorandom functions for a class of constraints containing a weak pseudorandom function (wPRF). We then show that tweaking the Naor-Reingold pseudorandom function and relying on low-complexity pseudorandom functions allow us to instantiate our paradigm. We further extend our ideas to obtain efficient public-key PCFs, which allow the distribution of correlated keys between parties to be non-interactive: each party can generate a pair of public/secret keys, and any pair of parties can locally derive their correlated evaluation key by combining their secret key with the other party’s public key.
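For readers unfamiliar with the Naor-Reingold PRF mentioned above, here is a toy Python sketch of ours of the plain (unconstrained) function NR_a(x) = g^(a_0 · prod_{i : x_i = 1} a_i); the parameters are illustrative only and far too small for security, and the constrained tweaks from the talk are not shown:

```python
import random

# Toy Naor-Reingold PRF over the order-q subgroup of Z_p^*.
# Toy parameters only: p = 23, q = 11, and g = 2 has order 11 mod 23.
p, q, g = 23, 11, 2

def nr_keygen(n, seed=0):
    """Sample the key a_0, ..., a_n uniformly from Z_q^*."""
    rng = random.Random(seed)
    return [rng.randrange(1, q) for _ in range(n + 1)]

def nr_eval(key, x_bits):
    """NR_a(x) = g^(a_0 * prod_{i : x_i = 1} a_i), exponent taken mod q."""
    e = key[0]
    for a_i, bit in zip(key[1:], x_bits):
        if bit:
            e = (e * a_i) % q
    return pow(g, e, p)

key = nr_keygen(4)
y = nr_eval(key, [0, 1, 0, 1])
print(pow(y, q, p))   # outputs lie in the order-q subgroup, so this prints 1
```

The Naor-Reingold structure is what makes the constrained evaluation of the talk possible: fixing some bits of x fixes the corresponding factors of the exponent.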

In addition to these theoretical contributions, we detail various optimizations and provide concrete instantiations of our paradigm relying on the Boneh-Ishai-Passelègue-Sahai-Wu wPRF and the Goldreich-Applebaum-Raykov wPRF. Putting everything together, we obtain public-key PCFs with a throughput of 15k-40k OT/s, which is of a similar order of magnitude to the state-of-the-art interactive PCFs and about 4 orders of magnitude faster than state-of-the-art public-key PCFs.

As a side result, we also show that public-key PCFs can serve as a building block to construct reusable designated-verifier non-interactive zero-knowledge proofs (DV-NIZK) for NP. Combined with our instantiations, this yields simple and efficient reusable DV-NIZKs for NP in pairing-free groups.

Joint work with Dung Bui, Geoffroy Couteau, Pierre Meyer, Alain Passelègue, and Mahshid Riahinia


14:45. Maxime Bombar (Université de Bordeaux). From Structured Codes to Secure Computation, and Back Again.


Abstract: It is well known that random linear codes are hard to decode, even with the help of quantum computers. In particular, they offer a compelling alternative to lattice-based post-quantum cryptography, which has just been standardised. However, this traditional cryptography only protects data in transit, while a growing need today is to perform computations on that data. While lattices also enable (fully) homomorphic encryption, it still requires heavy computations that may not be suitable for all situations. In particular, when data is distributed among multiple users, secure MultiParty Computation (MPC) can offer a more practical solution. However, the main bottleneck in MPC protocols is often the cost of communication between the parties.

Fortunately, it has long been known that extremely efficient online MPC protocols do exist, provided that all parties have access to a trusted source of correlated randomness, independent of the protocol's inputs. Building efficient MPC protocols therefore reduces to distributing a large amount of correlated randomness to all the parties. To this end, new primitives known as Pseudorandom Correlation Generators (PCG) have recently been developed, based on the hardness of decoding random linear codes with a specific algebraic structure. These are now considered one of the most promising approaches for performing MPC in this so-called "correlated randomness model".

Séminaire C2, July 5, 2024

  • 10:00. Martino Borello, Université Paris 8 – LAGA

The geometry of linear codes and some recent applications

It is well-known that a nondegenerate linear code of length n and dimension k can be associated with a set of n points (with multiplicities) in a projective space of dimension k-1. Some coding-theoretical properties can be interpreted geometrically. This perspective connects MDS codes to problems involving arcs in projective spaces (the famous MDS conjecture was initially formulated as a problem in projective geometry by Segre), covering problems to saturating sets, minimal codes to strong blocking sets, and so on. In this talk, we will illustrate some recent results obtained by using this geometrical approach for Hamming-metric codes and outline how this can be generalized to other metrics, such as the rank and sum-rank metrics.

  • 11:00. Antonin Leroux, DGA-MI et Université de Rennes

SQIsign2D-West: The Fast, the Small, and the Safer

In this talk, we will present SQIsign2D-West, a variant of SQIsign using two-dimensional isogeny representations. SQIsignHD was the first variant of SQIsign to use higher-dimensional isogeny representations, with an impractical eight-dimensional variant geared towards provable security and a four-dimensional variant geared towards efficiency. In particular, it has significantly faster signing times than SQIsign, but slower verification owing to the complexity of the four-dimensional representation.
A two-dimensional variant was already mentioned at the time, but several obstacles remained before it could be realised.
In this work, we introduce new algorithmic tools that make two-dimensional representations a viable alternative. These lead to a signature scheme with sizes comparable to SQIsignHD, signing slightly slower than SQIsignHD but still much faster than SQIsign, and the fastest verification of any known variant of SQIsign. We achieve this without compromising on the security proof: the assumptions behind SQIsign2D-West are similar to those of the eight-dimensional variant of SQIsignHD. Additionally, like SQIsignHD, SQIsign2D-West scales favourably to high security levels. Concretely, for NIST level I we achieve signing times of 80 ms and verification times of 4.5 ms, using optimised arithmetic based on intrinsics available on the Ice Lake architecture. For NIST level V, we achieve 470 ms for signing and 31 ms for verifying.

  • 13:45. Cécile Pierrot, Inria / CNRS / Loria / Université de Lorraine

In the 21st century, are there still old historical documents to be deciphered?

Surprisingly, yes. Sometimes, in a box of archives, the historian suddenly comes across an encrypted document in the form in which it was sent. It’s not surprising to find this kind of missive – as you well know, we’ve been encrypting documents for thousands of years. No, what is surprising is that the official recipient either did not decrypt it, or did not keep it and attach it. The keys to the ciphers are almost never included in the archives, and if we still find some with emotion, they rarely include the name of their owner. As a result, many historical documents remain unexploited because they are incomprehensible. Add to this the absence, for the moment, of ad hoc artificial intelligence methods capable of transcribing the dozens of pages encrypted by glyphs specific to each author, as well as the loss of cryptanalysis techniques over time, and finally the difficulty of bridging several disciplines (history, linguistics, cryptography, artificial intelligence), and you will understand the road we still have to travel. This talk will present current collaborations on letters and telegrams from the 16th to the 19th century. I’ll be taking you from the Europe of Charles V to the American War of Independence, via the post-independence troubles in Brazil.

  • 14:45. Jules Baudrin, Inria Paris.

Geometrical structures among known APN functions

An almost perfect nonlinear (APN) function is a function offering optimal resistance against differential cryptanalysis, one of the most powerful techniques for attacking or assessing the security of a block cipher. APN functions have therefore been carefully analyzed for many years, but they remain poorly understood. For example, only a few generic constructions of such functions are known, and the connections between them are not well understood. Among them, the simpler APN monomials nevertheless seem to play an important role. In this presentation, we will first introduce the necessary background before moving on to our latest results. These results bridge part of the gap between the known constructions by observing that many of the infinite families of quadratic APN functions share a common geometrical structure known as "the subspace property". This property was first observed on one of the most enigmatic APN functions, the so-called "Kim mapping", which is the only known APN function in an even number of variables that is CCZ-equivalent to a bijection. While attempts to find new APN functions having the subspace property have already been carried out, the property itself was not clearly understood. We clarify its nature, in particular by pointing out its link with cyclotomic mappings, which generalize monomial functions. Joint work with Anne Canteaut & Léo Perrin.
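To make the APN notion concrete, here is a small self-contained check (our illustration, not from the talk) that the Gold monomial x^3 is APN over GF(2^3): for every nonzero input difference a, each output difference occurs at most twice.

```python
# Check that f(x) = x^3 is APN over GF(2^3), the field built with the
# reduction polynomial x^3 + x + 1: f is APN iff, for every nonzero
# input difference a, each output difference b occurs at most twice.

def gf8_mul(x, y):
    r = 0
    for _ in range(3):
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x & 0b1000:
            x ^= 0b1011          # reduce modulo x^3 + x + 1
    return r

def cube(x):
    return gf8_mul(gf8_mul(x, x), x)

def differential_uniformity(f, size=8):
    worst = 0
    for a in range(1, size):
        counts = [0] * size
        for x in range(size):
            counts[f(x ^ a) ^ f(x)] += 1
        worst = max(worst, max(counts))
    return worst

print(differential_uniformity(cube))     # 2, i.e. the cube map is APN
```

Here the derivative x ↦ f(x ⊕ a) ⊕ f(x) is 2-to-1 for every a ≠ 0, which is exactly the APN condition.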

Séminaire C2, Friday January 27, Lyon

The next edition of the seminar will take place on Friday January 27 at ENS de Lyon, site Descartes, building D8 Buisson, room D01.
The entrance to the D8 Buisson building is at 19 allée de Fontenay, 69007 Lyon, 3 minutes from the Debourg station on metro line B (a direct line from the Part-Dieu train station; the metro is outside the station, on the city-centre side, so exit on the track A side). A reception desk is located behind the door, and the entrance is visible from room D01, which is directly opposite.

    • 10h45 : Johanna Loyer, INRIA Paris : Classical and quantum 3- and 4-sieves to solve SVP with low memory

Abstract: The Shortest Vector Problem (SVP) is at the foundation of lattice-based cryptography. The fastest known method to solve SVP in dimension d is lattice sieving, which runs in time 2^{td+o(d)} with 2^{md+o(d)} memory for absolute constants t, m. Searching for reducing vectors in the sieve reduces to the configuration problem, i.e. searching for k vectors satisfying given constraints on their scalar products. In this work, we present a framework for k-sieve algorithms: we filter the input list of lattice vectors using a code structure modified from [BDGL16] to get lists centered around k codewords summing to the null vector. Then, we solve a simpler instance of the configuration problem in the k filtered lists. Based on this framework, we describe classical sieves for k=3 and 4 that introduce new time-memory trade-offs. We also use the k-Lists algorithm [KPMP19] inside our framework, which improves the time for k=3 and gives new trade-offs for k=4.

    • 12h : Victor Dyseryn, XLIM, Université de Limoges – On the security of the PSSI problem and the Durandal signature scheme (joint work with Nicolas Aragon and Philippe Gaborit)

Abstract: The Product Spaces Subspaces Indistinguishability (PSSI) problem was presented at EUROCRYPT 2019 and is one of the pillars of the security of the Durandal signature scheme, an efficient rank-metric signature scheme introduced in the same 2019 paper. In this work we present a new attack against the PSSI problem. It combines signatures two by two to recover subspaces containing products of a secret scalar with a public scalar, and then uses a chain of intersections to recover the secret key. This new attack is efficient against a wide range of parameters. In the case of the EUROCRYPT 2019 Durandal parameters, it would allow recovery of the secret key in about 2^36 attempts (plus the cost of linear algebra), using only a few dozen signatures. We then investigate how to mitigate this new attack and propose new secure parameters for the Durandal signature scheme.

    • 14h30 : Bastien Pacifico, I2M, Aix-Marseille Université. Chudnovsky-type algorithms over the projective line for multiplication in finite fields.

Abstract: There are various models for measuring the complexity of a multiplication algorithm in a finite extension of a finite field. The bilinear complexity of such an algorithm is the number of bilinear multiplications it performs, i.e. multiplications that depend on both of the elements being multiplied. The method of D.V. and G.V. Chudnovsky (1988) makes it possible to construct algorithms with good bilinear complexity. These are evaluation/interpolation algorithms using the rational points of algebraic curves, i.e. the rational places of a function field. The method has since been generalized in several directions, notably to evaluation at places of arbitrary degree. In this talk, we propose a generic recursive construction of Chudnovsky-type algorithms over the field of rational functions, evaluating at places of increasing degree. Our construction yields polynomial-interpolation algorithms that are constructible in polynomial time and have an attractive bilinear complexity, for any extension degree. We will also see that there is a deep link between these algorithms and algebraic-geometry error-correcting codes.

Preprint: https://arxiv.org/abs/2007.16082
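To illustrate the evaluation/interpolation principle behind Chudnovsky-type algorithms, here is a toy Python sketch of ours that multiplies two polynomials over GF(p) by evaluating at rational points and interpolating the product; actual Chudnovsky constructions choose evaluation places carefully (on curves, or here places of increasing degree on the projective line) to minimise bilinear multiplications:

```python
# Toy evaluation/interpolation multiplication over GF(p): evaluate both
# polynomials at enough points, multiply the values pointwise, and
# interpolate the product back. Illustrative sketch only.
p = 101

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):           # Horner's rule, low-to-high coeffs
        acc = (acc * x + c) % p
    return acc

def poly_mul_lin(poly, c):
    """Multiply poly by (X + c) modulo p."""
    res = [0] * (len(poly) + 1)
    for k, a in enumerate(poly):
        res[k] = (res[k] + a * c) % p
        res[k + 1] = (res[k + 1] + a) % p
    return res

def interpolate(points):
    """Lagrange interpolation over GF(p); returns low-to-high coefficients."""
    n = len(points)
    result = [0] * n
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = poly_mul_lin(basis, (-xj) % p)
                denom = denom * (xi - xj) % p
        scale = yi * pow(denom, p - 2, p) % p
        result = [(r + scale * b) % p for r, b in zip(result, basis)]
    return result

def ev_ip_mul(f, g):
    m = len(f) + len(g) - 1              # number of evaluation points needed
    pts = [(x, poly_eval(f, x) * poly_eval(g, x) % p) for x in range(m)]
    return interpolate(pts)

print(ev_ip_mul([3, 0, 2], [1, 4]))      # [3, 12, 2, 8] = (3 + 2X^2)(1 + 4X)
```

Only the pointwise products here depend on both inputs; counting and minimising them is exactly the bilinear-complexity question of the talk.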

    • 15h45 : Benjamin Wesolowski, CNRS & ENS de Lyon – Hard problems for isogeny-based cryptography

Abstract: Isogeny-based cryptography is one of the few branches of public-key cryptography that promises to resist quantum attacks. The security of these cryptosystems relates to the (presumed) hardness of a variety of computational problems: finding paths in large "isogeny graphs", computing endomorphisms of elliptic curves, or inverting group actions. We present these problems, analyse their hardness, and discuss their applications to post-quantum cryptography.

Séminaire C2, Friday October 7, 2022

The next edition of the seminar will take place on Friday October 7 in the Jacques-Louis Lions room at Inria Paris.

  • 10:00. Thomas Decru, imec-COSIC, KU Leuven, Belgium.

    An efficient key recovery attack on SIDH.

    Abstract: SIDH is an isogeny-based public key exchange protocol from 2011, conjectured to be resistant to attacks by both classical and quantum computers. SIKE, a key encapsulation mechanism based on SIDH, advanced to the 4th round of NIST's post-quantum standardization process as an alternate candidate for becoming a new standard for public key exchange.

    In this talk we will discuss how the information from the SIDH protocol can be turned into an isogeny between a scarce type of abelian surfaces, a so-called "glue-and-split" isogeny in dimension two, which stems from a theorem of Kani from 1997. This allows us to break the decisional version of the SIDH protocol, which in turn breaks the computational version. The attack requires certain assumptions, but it is very practical and can break all the proposed NIST security levels of SIKE within a day on a regular desktop computer with a single core.

    Joint work with Wouter Castryck.

  • 11:10. Alexandre Wallet, INRIA Rennes.

    Mitaka – a simpler, parallelizable, maskable variant of Falcon.

    Abstract: Falcon is a lattice-based signature scheme recently selected by NIST as a future post-quantum standard. In terms of bandwidth consumption, Falcon is currently the best option, and its performance is close to that of the other selected scheme, Dilithium. However, Falcon's design choices impose conceptual limits that hinder its use in various scenarios: 1) the parameter sets are very restricted; 2) implementing the scheme is a delicate exercise; 3) protecting its current implementation is prohibitively expensive, on top of being difficult.

    The limiting factors of the scheme mainly stem from its sampler of Gaussian vectors in lattices. Our variant Mitaka, which follows the general design of Falcon, avoids the above pitfalls by using a different, so-called "hybrid" sampler, which had previously been set aside because of a lower estimated security level. Overall, our work shows that the security provided by the hybrid sampler can match that of Falcon, and that its conceptual simplicity allows for broader design and protection choices, at low cost. The talk will present the techniques and tools that allowed us to obtain these results for Mitaka, and will mention other recent general improvements to hash-then-sign over lattices.

    The talk draws on several joint works with Thomas Espitau, Pierre-Alain Fouque, François Gérard, Melissa Rossi, Akira Takahashi, Mehdi Tibouchi and Yang Yu.

  • 13:45. Malika Izabachène, Cosmian, Paris, France. 

    Plug-and-play sanitization for TFHE.

    Abstract: Fully Homomorphic Encryption (FHE) makes it possible to evaluate an arbitrary function on encrypted messages while preserving the confidentiality of the messages. This property has many applications, notably when one wishes to store data in the cloud while keeping the ability to compute on it. In this talk, we focus on the confidentiality of the algorithm run by the cloud service.

    A sanitization procedure for an FHE ciphertext guarantees that all the information contained in the ciphertext, apart from the underlying message, is destroyed. In particular, it is impossible to determine whether operations were performed to obtain the ciphertext, and which ones, even with knowledge of the secret key.

    We will see how to build, from the bootstrapping procedures of FHEW (Eurocrypt 2015) and TFHE (Asiacrypt 2016), a procedure that provides these security guarantees.

    Joint work with Florian Bourse.

  • 14:45. Loïc Rouquette, INSA Lyon, LIRIS, CITI.

    Finding Boomerang Distinguishers with Constraint Programming

    Abstract: Differential cryptanalysis is one of the most effective attack techniques in symmetric cryptography; it exploits certain statistical properties of ciphers. More precisely, for a cipher E, one observes the probability of obtaining an output difference δout = EK(M) ⊕ EK(M ⊕ δin) when a difference δin is injected at the input. In 1999, David Wagner (FSE 99) devised a variant of the differential attack, more effective in some cases, which decomposes the encryption function into two sub-functions: EK = EK0 ∘ EK1.

    To ensure that the differences propagated through the two sub-functions EK0 and EK1 are compatible, Cid et al. (Eurocrypt 18) introduced the notion of Boomerang Connectivity Table (BCT). Their technique was later extended and further tables were introduced: first the UBCT and LBCT tables by Wang and Peyrin (ToSC 19), then the EBCT table by Delaune et al. (ToSC 20).

    The growing complexity of the models arising from these techniques calls for efficient solving methods. Recently, declarative approaches (SAT, CP and (M)ILP) have been successfully applied to differential cryptanalysis problems.

    Using SAT and CP solvers, we apply the two-step solving approach, which starts with a truncated analysis of the cipher before moving to the binary version, to compute boomerang differential characteristics on Rijndael and Warp, building on the models of Delaune et al. (ToSC 20) and Gérault et al. (CP 16). We first show how to adapt the model of Delaune et al. (ToSC 20) from Skinny to Rijndael, and how to take differences in the key into account. We then show how to adapt the same model to Feistel schemes, and give our results on the ciphers Warp, Twine and LBlock-S (ToSC 22).

    Joint work with Virginie Lallemand, Marine Minier and Christine Solnon.
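To make the differential bookkeeping concrete, here is a small sketch of ours (not from the talk) computing the classical Difference Distribution Table of the PRESENT S-box; the boomerang tables above (BCT, UBCT, LBCT, EBCT) are refinements of this idea across two sub-ciphers:

```python
# Difference Distribution Table (DDT) of a 4-bit S-box: entry DDT[a][b]
# counts the inputs x with S(x ^ a) ^ S(x) = b. Differential
# cryptanalysis exploits large entries.
# S-box of PRESENT, used here purely as an example.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for a in range(n):
        for x in range(n):
            table[a][sbox[x ^ a] ^ sbox[x]] += 1
    return table

table = ddt(SBOX)
print(table[0][0])                          # 16: zero difference maps to zero
print(max(max(row) for row in table[1:]))   # 4: the differential uniformity
```

A differential (a, b) holds with probability DDT[a][b]/16; the largest nontrivial entry here is 4, i.e. probability 1/4, which is optimal for a 4-bit S-box.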

Séminaire C2, Friday June 10, 2022, Rennes

  • Clara Pernot, INRIA Paris – New Representations of the AES Key Schedule

Abstract: In this talk we present a new representation of the AES key schedule, with some implications for the security of AES-based schemes. In particular, we show that the AES-128 key schedule can be split into four independent parallel computations operating on 32-bit chunks, up to linear transformation. Surprisingly, this property had not been described in the literature after more than 20 years of analysis of AES. We show two consequences of our new representation, improving previous cryptanalysis results of AES-based schemes. First, we observe that iterating an odd number of key schedule rounds results in a function with short cycles. This explains an observation of Khairallah on mixFeed, a second-round candidate in the NIST lightweight competition. Our analysis actually shows that his forgery attack on mixFeed succeeds with probability 0.44 (with data complexity 220 GB), breaking the scheme in practice. The same observation also leads to a novel attack on ALE, another AES-based AEAD scheme.

Our new representation also gives efficient ways to combine information from the first subkeys and information from the last subkeys, in order to reconstruct the corresponding master keys. In particular we improve previous impossible differential attacks against AES-128.

This is a joint work with Gaëtan Leurent.

  • Thibauld Feneuil, CryptoExperts and Sorbonne Université – Syndrome Decoding in the Head: Shorter Signatures from Zero-Knowledge Proofs

Abstract: In this talk, I will present a new zero-knowledge proof of knowledge for the syndrome decoding (SD) problem on random linear codes. Instead of using permutations like most existing protocols, we rely on the MPC-in-the-head paradigm, in which we reduce the task of proving the low Hamming weight of the SD solution to proving relations between specific polynomials. Specifically, we propose a 5-round zero-knowledge protocol that proves knowledge of a vector x such that y = Hx and wt(x) ≤ w, and which achieves a soundness error close to 1/N for an arbitrary N.

While turning this protocol into a signature scheme, we achieve a signature size of 11-12 KB for 128-bit security when relying on the hardness of the SD problem on binary fields. Using larger fields (like F_{256}), we can produce fast signatures of around 8 KB. This allows us to outperform Picnic3 and to be competitive with SPHINCS+, both post-quantum signature candidates in the ongoing NIST standardization effort. Since the security relies on the hardness of the syndrome decoding problem for random linear codes, which is known to be NP-hard and for which the cryptanalysis state of the art has been stable for many years, it results in a conservative signature scheme. Moreover, our scheme outperforms all the existing code-based signature schemes for the common "signature size + public key size" metric.

Joint work with Antoine Joux and Matthieu Rivain.
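As a reminder of the object the protocol proves knowledge of, here is a toy syndrome decoding instance over GF(2); the parameters are illustrative choices of ours, far below cryptographic size:

```python
import random

# Toy syndrome decoding instance: random parity-check matrix H, secret
# vector x of Hamming weight w, public syndrome y = H x over GF(2).
# The prover's goal is to prove knowledge of x with wt(x) <= w.
rng = random.Random(1)
n, k, w = 24, 12, 3                     # length, dimension, target weight

H = [[rng.randrange(2) for _ in range(n)] for _ in range(n - k)]
support = rng.sample(range(n), w)       # positions of the w ones in x
x = [1 if i in support else 0 for i in range(n)]
y = [sum(row[i] & x[i] for i in range(n)) % 2 for row in H]

print(len(y), sum(x))                   # prints: 12 3
```

Recovering a weight-w preimage of y from (H, y) alone is the hard problem; the MPC-in-the-head proof convinces a verifier that the signer knows such an x without revealing it.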

  • Fangan Yssouf Dosso, École des Mines de Saint-Étienne, Département SAS – PMNS for efficient arithmetic and small memory cost

Abstract: The Polynomial Modular Number System (PMNS) is an integer number system which aims to speed up arithmetic operations modulo a prime p. Such a system is defined by a tuple (p, n, g, r, E), where p, n, g and r are positive integers and E is a monic polynomial with integer coefficients having g as a root modulo p. Most of the work on PMNS focuses on polynomials E such that E(X) = X^n - l, where l is a nonzero integer, because such a polynomial provides efficiency and low memory cost, in particular for l = 2 or -2.

However, these turn out not always to be the best choices. In this presentation, we start with the necessary background on PMNS. Then, we highlight a new set of polynomials E for very efficient operations in the PMNS and low memory requirements, along with new bounds and parameters. We show that these polynomials are more interesting than (most) polynomials E(X) = X^n - l. Finally, we show how to use the PMNS to randomise arithmetic operations, and thereby higher-level operations such as elliptic curve scalar multiplication, in order to protect implementations against advanced side-channel attacks such as differential power analysis (DPA).

Joint work with Jean-Marc Robert, Nadia El Mrabet, Laurent-Stéphane Didier and Pascal Véron
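To make the definition concrete, here is a hand-picked toy PMNS instance of ours with E(X) = X^3 - 2, p = 41 and g = 5 (indeed 5^3 = 125 ≡ 2 mod 41, so g is a root of E modulo p); real PMNS parameters are far larger and include coefficient-reduction steps not shown here:

```python
# Toy PMNS: elements of Z_p are represented by polynomials of degree < n;
# multiplication is done modulo E(X) = X^n - l, and evaluating at g
# recovers the product modulo p, since E(g) = 0 mod p.
p, n, g, l = 41, 3, 5, 2

def pmns_mul(a, b):
    # Schoolbook product of two degree-<n polynomials, then reduction by
    # X^n = l: the coefficient of X^(n+i) folds back onto X^i, times l.
    prod = [0] * (2 * n - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] += ai * bj
    for i in range(2 * n - 2, n - 1, -1):
        prod[i - n] += l * prod[i]
    return prod[:n]

def pmns_eval(a):
    """Map a PMNS representation back to the integer it represents mod p."""
    return sum(c * pow(g, i, p) for i, c in enumerate(a)) % p

a, b = [1, 2, 0], [3, 0, 1]              # 1 + 2X  and  3 + X^2
c = pmns_mul(a, b)                       # [7, 6, 1]
print(pmns_eval(c) == pmns_eval(a) * pmns_eval(b) % p)   # True
```

The point of the talk is that the shape of E governs both the cost of the folding step above and the growth of the coefficients, hence the search for better polynomials than X^n - l.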

  • Ivan Pogildiakov, IRMAR, Université Rennes 1 – Binary codes, hyperelliptic curves, and the Serre bound

Abstract: Algebraic curves over finite fields have found nice applications in areas such as coding theory and cryptography. In that context, an object of this sort is meaningful only if its number of rational points is as close as possible to the celebrated Serre bound. However, the question of the existence of an algebraic curve having a prescribed number of rational points is of theoretical interest in its own right. There are different types of results in this direction in the literature, for example explicit or non-explicit constructions in the case when the definition field, the genus, or the number of rational points varies.

In this talk, we propose a new approach for constructing hyperelliptic curves over a fixed finite field of odd characteristic. As a main tool, a specific binary linear code defined by the field is used. A nice feature of the approach is that it translates the existence and search problems for a hyperelliptic curve with desired parameters into those for a codeword of corresponding Hamming weight in a coset of the linear code. In some cases, by means of the machinery of the general theory of linear codes, we manage to find curves (having a fixed number of rational points) whose genera are reasonably close to the Serre bound.

For the sake of simplicity, the case of prime fields is mainly discussed. If time permits, as a by-product result, we will obtain new examples of curves with many points that improve upon some known lower bounds.

Séminaire C2, Friday December 17, 2021, Rennes – Postponed due to COVID

The next C2 seminar will take place in person on Friday December 17, 2021 at Université de Rennes 1, in the Métivier amphitheatre of IRISA, on the Beaulieu campus.

Video link to come

The program is as follows:

  • 9h30, Aurore Guillevic, Inria Nancy and Aarhus University – Elliptic curves for zk-SNARKs

Joint work with Youssef El Housni, Consensys, Inria Saclay, École Polytechnique, and Diego Aranha, Aarhus University

Abstract: SNARK stands for Succinct Non-interactive ARgument of Knowledge and is a powerful tool in blockchains and cryptocurrencies. The most efficient at the moment is the Groth'16 preprocessing SNARK, implemented with a pairing-friendly elliptic curve with a subgroup of prime order q of 255 bits (the BLS12-381 curve). It provides a zero-knowledge proof of an arithmetic circuit that performs arithmetic operations modulo q.

For a recursive SNARK, one applies again the strategy of arithmetisation and proof: a compiler produces an arithmetic circuit that performs some cryptographic computation (in Groth’16: computing pairings and checking an equation). This second circuit deals with arithmetic operations not modulo q but modulo p, the field of definition of the first curve. A second pairing-friendly elliptic curve is required, whose order should be this prime p: this is a chain of elliptic curves.

To achieve a recursive SNARK, and recursively prove proofs (a.k.a. bootstrapping proofs), Ben-Sasson, Chiesa, Tromer, and Virza introduced in 2014 a cycle of pairing-friendly curves: an MNT curve of prime order q defined over a field GF(p), and a second MNT curve of prime order p defined over the prime field GF(q). That is, the size of the field of definition of each curve equals the order of the other curve. However, generating such curves takes a large computational effort: [BCTV14] spanned all curve discriminants up to 10^15, in 610,000 core-hours on a cluster, to obtain two cycles: one at 298 bits and a second at 753 bits. Moreover, the pairing takes its values in a quartic extension for each curve (1192 and 3012 bits respectively), which no longer provides 128 bits of security because of the TNFS algorithm for computing discrete logarithms in extension fields (De Micheli et al. presented the first implementation at Asiacrypt’21). While a cycle of pairing-friendly curves is optimal for a recursive SNARK construction, it is very slow because of the many constraints on the curves. To date, no other cycle of pairing-friendly curves is known.
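The defining property of such a cycle (each curve's order equals the other's base-field size) can be demonstrated at toy scale by exhaustive search; this sketch deliberately ignores pairing-friendliness, which is precisely the expensive part that [BCTV14] had to search for:

```python
# Brute-force search for a toy 2-cycle of elliptic curves: E1/GF(p) of
# prime order q, and E2/GF(q) of order p. Only illustrates the cycle
# condition; real (pairing-friendly MNT) cycles are vastly larger.
def curve_order(a, b, p):
    """#E(GF(p)) for E: y^2 = x^3 + a*x + b, by enumeration (+ infinity)."""
    counts = {}
    for y in range(p):
        counts[y * y % p] = counts.get(y * y % p, 0) + 1
    return 1 + sum(counts.get((x**3 + a * x + b) % p, 0) for x in range(p))

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def find_cycle(p):
    for a1 in range(p):
        for b1 in range(p):
            if (4 * a1**3 + 27 * b1**2) % p == 0:      # singular, skip
                continue
            q = curve_order(a1, b1, p)
            if not is_prime(q) or q == p:
                continue
            for a2 in range(q):
                for b2 in range(q):
                    if (4 * a2**3 + 27 * b2**2) % q == 0:
                        continue
                    if curve_order(a2, b2, q) == p:
                        return (p, a1, b1, q, a2, b2)
    return None

res = find_cycle(5)
print(res is not None)  # True: a toy cycle exists at this size
```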

To circumvent the slowness of such cycles, one can instead use a chain of curves: the second (outer) curve has a subgroup of prime order being the field of definition of the first (inner) curve. Such a construction is much easier, and more pairing-friendly curves are possible. We generalise these constructions to families of SNARK-friendly 2-chains of pairing-friendly elliptic curves and compare their efficiency.

Finally, other cycles, half-cycles, or chains of curves arise in variants of zero-knowledge proofs: cycles of non-pairing-friendly curves such as Pallas-Vesta, and half-pairing-friendly cycles such as Pluto-Eris. If time allows, an overview of these alternative curves will be given.

  • 10h45, Alexandre Wallet, Univ Rennes, CNRS, IRISA – Towards the deployment of lattice-based "hash-and-sign" signature schemes

Abstract: The NIST post-quantum standardization competition is nearing its end. Among the three main finalists selected for digital signatures is the Falcon proposal. This scheme is an efficient instantiation of the "GPV" paradigm (Gentry-Peikert-Vaikuntanathan), which yields digital signature schemes based on the hardness of structured Euclidean lattice problems. In terms of speed, and especially bandwidth, Falcon seems to outclass its direct competitors. However, the scheme suffers from conceptual limitations that may hinder its use in various scenarios: 1) the parameter sets are very restricted; 2) implementing the scheme is a delicate exercise; 3) protecting the implementation, at least against timing attacks,
seriously impacts its performance (besides being, here again, technically demanding). In this talk, I will describe in more detail the limiting building blocks of the scheme, mainly related to the sampling of Gaussian vectors in Euclidean lattices. Indeed, in the case of Falcon, it is this algorithmic step that determines the theoretical and concrete security of the scheme, but it is also a source of leakage from the point of view of side-channel attacks. I will then present Mitaka, an alternative proposal following the design of Falcon that avoids the previous pitfalls. Our scheme Mitaka is thus 1) conceptually simpler, and instantiable with large families of parameters; 2) noticeably faster for equivalent security and bandwidth consumption; 3) maskable at arbitrary order, at a modest cost and with standard techniques. Finally, Mitaka can be implemented without resorting to floating-point arithmetic.

The talk draws on several works in collaboration with T. Espitau, P.A. Fouque, F. Gérard, P. Kirchner, M. Rossi, A. Takahashi, M. Tibouchi and Y. Yu.

  • 14h00 : Clara Pernot, INRIA Paris, New Representations of the AES Key Schedule

Abstract: In this talk we present a new representation of the AES key schedule, with some implications for the security of AES-based schemes. In particular, we show that the AES-128 key schedule can be split into four independent parallel computations operating on 32-bit chunks, up to linear transformation. Surprisingly, this property had not been described in the literature in more than 20 years of analysis of AES. We show two consequences of our new representation, improving previous cryptanalysis results for AES-based schemes. First, we observe that iterating an odd number of key schedule rounds results in a function with short cycles. This explains an observation of Khairallah on mixFeed, a second-round candidate in the NIST lightweight competition. Our analysis actually shows that his forgery attack on mixFeed succeeds with probability 0.44 (with data complexity 220 GB), breaking the scheme in practice. The same observation also leads to a novel attack on ALE, another AES-based AEAD scheme.

Our new representation also gives efficient ways to combine information from the first subkeys and information from the last subkeys, in order to reconstruct the corresponding master keys. In particular, we improve previous impossible differential attacks against AES-128. This is joint work with Gaëtan Leurent.
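For reference, the standard AES-128 key expansion (FIPS 197) whose structure the new representation decomposes; this sketch computes the S-box from its algebraic definition rather than a lookup table, and does not attempt to reproduce the talk's 4-way decomposition:

```python
# Standard AES-128 key expansion (FIPS 197): 16 key bytes -> 44 round-key
# words. The S-box is computed from its definition: inversion in GF(2^8)
# followed by an affine map with constant 0x63.
def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B              # reduce mod x^8 + x^4 + x^3 + x + 1
        b >>= 1
    return r

def gf_inv(x):
    if x == 0:
        return 0
    return next(y for y in range(1, 256) if gf_mul(x, y) == 1)

def sbox(x):
    b = gf_inv(x)
    res = 0
    for i in range(8):              # affine transform of the inverse
        bit = ((b >> i) ^ (b >> ((i + 4) % 8)) ^ (b >> ((i + 5) % 8))
               ^ (b >> ((i + 6) % 8)) ^ (b >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        res |= bit << i
    return res

def expand_key(key16):
    RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]
    w = [list(key16[4 * i:4 * i + 4]) for i in range(4)]
    for i in range(4, 44):
        t = list(w[i - 1])
        if i % 4 == 0:
            t = t[1:] + t[:1]                   # RotWord
            t = [sbox(v) for v in t]            # SubWord
            t[0] ^= RCON[i // 4 - 1]            # round constant
        w.append([a ^ b for a, b in zip(w[i - 4], t)])
    return w

w = expand_key(bytes(16))                        # all-zero key
print(['%02x%02x%02x%02x' % tuple(x) for x in w[4:6]])
# first round-key words for the zero key are 62636363
```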

  • 15h15 : Victor Mollimard,  Univ Rennes, CNRS, IRISA – Efficient Methods to Search for Best Differential Characteristics on SKINNY

Abstract: Evaluating the resistance of ciphers against differential cryptanalysis is essential to define the number of rounds of new designs and to mount attacks derived from differential cryptanalysis. In this paper, we propose automatic tools to find the best differential characteristics on the SKINNY block cipher. As usually done in the literature, we split this search into two stages, denoted Step 1 and Step 2. In Step 1, we aim at finding all truncated differential characteristics with a low enough number of active S-boxes. Then, in Step 2, we try to instantiate each difference value while maximizing the overall differential characteristic probability. We solve Step 1 using an ad hoc method inspired by the work of Fouque et al., whereas Step 2 is modelled for the Choco-solver library, as it seems to outperform all previous methods at this stage. Notably, for SKINNY-128 in the SK model and for 13 rounds, we retrieve the results of Abdelkhalek et al. within a few seconds (compared with 16 days), and we provide, for the first time, the best differential related-tweakey characteristics up to 14 rounds for the TK1 model. Regarding the TK2 and TK3 models, we were not able to test all the solutions of Step 1, and thus the differential characteristics we found up to 16 and 17 rounds are not necessarily optimal.

C2 seminar of Friday 15 October 2021

The next C2 seminar will be held in person on Friday 15 October 2021 at INRIA Paris, 2 rue Simone Iff, Paris 12.

The link to join the meeting by video-conference is:
https://univ-rennes1-fr.zoom.us/j/7058629792

The programme is as follows:

  • 10h00, Katharina Boudgoust, IRISA, Université Rennes 1, Hardness of Module Learning With Errors With Small Secrets

Abstract: This talk focuses on the module variant of the Learning With Errors problem (Module-LWE), a fundamental computational problem in today's lattice-based cryptography. To highlight its importance for practical schemes, note that several finalists of the NIST standardization project rely on the hardness of Module-LWE, e.g., the signature scheme Dilithium and the key encapsulation mechanism Kyber.

More concretely, in this talk we consider Module-LWE where the underlying secret is of small norm. Whereas several works prove the hardness of unstructured LWE with short secrets, little was known for the module case. In this presentation I summarize our results, which previously appeared at Asiacrypt 2020 and CT-RSA 2021, where we investigate the hardness of Module-LWE when the secret has coefficients in the set {0,1}, also called binary Module-LWE. Furthermore, I show that both results naturally generalize to secrets with bounded coefficients. The talk concludes with some interesting open questions related to the hardness of Module-LWE with non-uniform secrets.

This is joint work with Corentin Jeudy, Adeline Roux-Langlois and Weiqiang Wen.
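As a minimal illustration of what "secret with coefficients in {0,1}" means, here is a toy unstructured LWE instance with a binary secret (the module structure is omitted and the sizes are purely illustrative):

```python
# Toy (unstructured) LWE instance with a binary secret: b = A*s + e mod q.
# Module-LWE additionally arranges A and s over a polynomial ring; that
# structure is omitted here.
import random

q, n, m = 3329, 8, 16          # toy sizes (q as in Kyber; n far smaller)
s = [random.randint(0, 1) for _ in range(n)]            # binary secret
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
e = [random.randint(-2, 2) for _ in range(m)]           # small error
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

# Sanity check: with the true s, the residual b - A*s (centered mod q)
# is exactly the small error vector e.
recovered = [(b[i] - sum(A[i][j] * s[j] for j in range(n))) % q for i in range(m)]
centered = [v if v <= q // 2 else v - q for v in recovered]
print(centered == e)  # True
```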

  • 11h15, Magali Bardet, LITIS, Université de Rouen Normandie, Algebraic cryptanalysis of problems in code-based cryptography and applications

Abstract: Rank-metric code-based cryptography relies on the hardness of decoding a random linear code in the rank metric. This fundamental problem is called the MinRank problem, and it is ubiquitous in rank-metric (and even Hamming-metric) code-based cryptography, as well as in multivariate cryptography. For the structured instances arising in the former, security relies on a more specific problem, namely the Rank Syndrome Decoding problem. There is also a generalization called the Rank Support Learning problem, where the attacker has access to several syndromes corresponding to errors with the same support. These problems have various applications in code-based and multivariate cryptography (KEM and signature schemes), and a precise understanding of the complexity of solving them can help designers create secure parameters.

In this talk, I will present the three problems and their relations to cryptographic schemes, their algebraic modelling, and the recent improvements in the understanding of the complexity of solving those systems using algebraic techniques such as Gröbner basis computations.

Joint works with Pierre Briaud, Maxime Bros, Daniel Cabarcas, Philippe Gaborit, Vincent Neiger, Ray Perlner, Olivier Ruatta, Daniel Smith-Tone, Jean-Pierre Tillich and Javier Verbel.
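To fix ideas on the MinRank problem mentioned above, here is a toy instance over GF(2) with a planted low-rank combination, solved by brute force; real cryptanalytic instances are attacked with the talk's algebraic methods, not enumeration:

```python
# Toy MinRank over GF(2): given matrices M1, M2, M3, find nonzero
# (x1, x2, x3) with rank(x1*M1 + x2*M2 + x3*M3) <= r.
# Rows are stored as integer bitmasks; addition is XOR.
from itertools import product

def gf2_rank(rows):
    pivots = {}                          # leading bit -> pivot row
    for row in rows:
        while row:
            lead = row.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = row
                break
            row ^= pivots[lead]          # eliminate the leading bit
    return len(pivots)

L  = [0b100, 0b100, 0b100]               # planted rank-1 matrix
M2 = [0b110, 0b011, 0b101]
M3 = [0b011, 0b110, 0b010]
M1 = [a ^ b for a, b in zip(L, M2)]      # so that M1 + M2 = L
mats, r = [M1, M2, M3], 1

found = None
for x in product([0, 1], repeat=3):
    if not any(x):
        continue
    comb = [0, 0, 0]
    for xi, M in zip(x, mats):
        if xi:
            comb = [a ^ b for a, b in zip(comb, M)]
    if gf2_rank(comb) <= r:
        found = x
        break
print(found)  # (1, 1, 0): the planted low-rank combination
```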

  • 13h45 : Antonin Leroux, Grace, LIX and INRIA Saclay–Île-de-France, Séta: Supersingular Encryption from Torsion Attacks

Abstract: We present Séta, a new family of public-key encryption schemes with post-quantum security based on isogenies of supersingular elliptic curves. It is constructed from a new family of trapdoor one-way functions, where the inversion algorithm uses Petit's so-called torsion attacks on SIDH to compute an isogeny between supersingular elliptic curves, given an endomorphism of the starting curve and images of torsion points. We prove the OW-CPA security of Séta and present an IND-CCA variant using the post-quantum OAEP transformation. Several variants for key generation are explored, together with their impact on the selection of parameters, such as the base prime of the scheme. We furthermore formalise an "uber" isogeny assumption framework which aims to generalize the computational isogeny problems encountered in schemes including SIDH, CSIDH, OSIDH and ours. Finally, we carefully select parameters to achieve a balance between security and running times, and present experimental results from our implementation.

  • 15h00 : Chloé Hébant, Cascade, ENS and INRIA Paris – Traceable Constant-Size Multi-Authority Credentials

Abstract: Many attribute-based anonymous credential (ABC) schemes have been proposed, allowing a user to prove possession of some attributes anonymously. They have become more and more practical with, in the most recent papers, a constant-size credential to show a subset of attributes issued by a single credential issuer. However, proving possession of attributes coming from K different credential issuers usually requires K independent credentials to be shown. Only attribute-based credential schemes from aggregatable signatures can overcome this issue.

In this paper, we propose new ABC schemes from aggregatable signatures with randomizable tags. We consider malicious credential issuers, with adaptive corruptions and collusions with malicious users. Whereas our constructions only support selective disclosures of attributes, to remain compact, our approach significantly improves the complexity in both time and memory of the showing of multiple attributes: for the first time, the cost for the prover is (almost) independent of the number of attributes and the number of credential issuers.

Whereas anonymous credentials require privacy of the user, we propose the first schemes allowing traceability.

We formally define an aggregatable signature scheme with (traceable) randomizable tags, which is of independent interest. We build concrete schemes from the recent linearly homomorphic signature scheme of PKC ’20. As with all recent ABC schemes, our construction relies on signatures whose unforgeability is proven in the bilinear generic group model.

C2 seminar of Friday 4 June 2021

Programme of the seminar of Friday 4 June 2021

    • 10h00 – Jade Nardi, INRIA, Interactive proofs of proximity of a word to an algebraic-geometry code
Abstract: In recent years, enormous progress has been made in the field of verifiable computation, which makes it possible to provide, along with a result, a short and quickly verifiable proof that the computation was carried out correctly. Some of these proofs can also be zero-knowledge, and they are now widely deployed in blockchain-related applications.
Among the various families of verifiable computation schemes, one family of constructions fundamentally relies on proximity tests to algebraic codes. The constructions deployed in industry use an interactive protocol, called FRI (Fast Reed-Solomon Interactive Oracle Proof of Proximity), which verifies proximity to Reed-Solomon codes in logarithmic time (in the length of the code).
Naturally, one may ask whether algebraic-geometry codes, which generalize Reed-Solomon codes, admit proximity tests with similar parameters. In this talk, we will present the FRI protocol in detail, stepping back to examine the mathematical tools that make it efficient. We will describe how to generalize these tools to algebraic-geometry codes (which we will take the time to introduce briefly), which obstacles arise naturally from the geometry of non-rational curves, and how to overcome them by adapting the protocol. We will detail the case of codes on Kummer extensions.
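The key algebraic step behind FRI's logarithmic verification is the even/odd split f(x) = f_e(x²) + x·f_o(x²), which halves the degree at each round; a toy sketch of one folding step over a small prime field (parameters are illustrative assumptions):

```python
# One FRI folding step over GF(p). Write f(x) = f_e(x^2) + x * f_o(x^2);
# given a random challenge beta, the prover moves to g = f_e + beta*f_o,
# whose degree is half that of f. The verifier checks consistency from
# the two evaluations f(x) and f(-x). Toy parameters; real deployments
# use large FFT-friendly fields.
p = 97
beta = 23                                   # verifier's random challenge

f = [3, 1, 4, 1, 5, 9, 2, 6]                # coefficients of f, deg < 8
f_e, f_o = f[0::2], f[1::2]                 # even / odd coefficient parts

def ev(poly, x):
    return sum(c * pow(x, i, p) for i, c in enumerate(poly)) % p

g = [(a + beta * b) % p for a, b in zip(f_e, f_o)]   # folded poly, deg < 4

x = 5                                        # spot check at one point
fe_at = (ev(f, x) + ev(f, -x % p)) * pow(2, -1, p) % p       # = f_e(x^2)
fo_at = (ev(f, x) - ev(f, -x % p)) * pow(2 * x, -1, p) % p   # = f_o(x^2)
lhs = (fe_at + beta * fo_at) % p
print(lhs == ev(g, pow(x, 2, p)))            # True: the fold is consistent
```

Iterating this step log(deg f) times is what gives FRI its logarithmic verification cost; the talk's question is how to find an analogue of the x ↦ x² map for curves of higher genus.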

 

    • 11h00 – David Pointcheval, ENS, Verifiable Oblivious Polynomial Evaluation
Abstract: Oblivious Polynomial Evaluation (OPE) allows Alice to obtain the evaluation, on a secret input of her choice, of a polynomial known only to Bob. Applications abound, such as Private Set Intersection; in that case, it is moreover necessary to ensure that Bob uses the same polynomial for every evaluation.
In this talk, we show how to prove the correct evaluation of a secret polynomial, committed to by Bob, on an encrypted input from Alice, using Fully Homomorphic Encryption. The proof is compact, and independent of the degree of the polynomial being evaluated.
This is joint work with Paola de Perthuis, Malika Izabachène and Anca Nitulescu.

C2 seminar of Friday 19 March 2021

Programme of the seminar

  • 10h00 – Mélissa Rossi, ANSSI, Assessing residual security of lattice-based cryptography
Abstract: This talk will present a framework for cryptanalysis of lattice-based schemes when side information, in the form of "hints" about the secret, is available. The presentation outlines a joint work with Dana Dachman-Soled, Léo Ducas and Huijing Gong that was presented at CRYPTO 2020 (https://eprint.iacr.org/2020/292).
This framework generalizes the primal lattice reduction attack and allows the progressive integration of hints before running a final lattice reduction step. The techniques for integrating hints include sparsifying the lattice, projecting onto and intersecting with hyperplanes, and/or altering the distribution of the secret vector. The main contribution is a toolbox and a methodology for integrating such hints into lattice reduction attacks and for predicting the performance of those attacks with side information.
While initially designed for side-channel information, this framework can also be used in other settings. For example, one can directly exploit constraints imposed by certain schemes (LAC, Round5, NTRU). Besides, I will present a way to combine this framework with decryption-failure information, based on a joint work with Jan-Pieter D'Anvers and Fernando Virdia presented at EUROCRYPT 2020 (https://eprint.iacr.org/2019/1399).

 

  • 11h00 – André Schrottenloher, CWI Amsterdam, Quantum tools for symmetric cryptanalysis
Abstract : The security of (symmetric) cryptosystems is based on cryptanalysis: our confidence stems from an extensive and continuous public scrutiny. Meanwhile, some consequences of large-scale quantum computing devices in cryptography are well-known, for example, the quadratic speedup of Grover’s algorithm against exhaustive key search. But the post-quantum security of symmetric cryptography cannot be precisely asserted unless we try to design more specific attacks, as we do in the classical world.
In this talk, I will give two examples of symmetric designs that can be attacked with specific quantum algorithms. I will first focus on the Even-Mansour construction, which turns a public permutation into a cipher. Even-Mansour has a strong algebraic structure and, in fact, is completely broken if the adversary is allowed "superposition queries". Yet this structure seems difficult to exploit for an adversary making standard, classical queries, and this has been an open problem for a while. We will see how to significantly improve the quantum key recovery on Even-Mansour using the "offline Simon's algorithm" (joint work with X. Bonnetain, A. Hosoyamada, M. Naya-Plasencia and Y. Sasaki).
Our second target will be multiple encryption: a cipher obtained by composing independent ciphers in sequence, with independent keys. I will briefly introduce quantum merging algorithms (joint work with M. Naya-Plasencia) and their application to exhaustive key search in multiple encryption. These algorithms are quite generic and have many other applications in cryptography.
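For context on multiple encryption: classically, double encryption already loses security to a meet-in-the-middle attack, the tradeoff that quantum merging algorithms generalize. A toy sketch with a hypothetical 8-bit-key cipher (a deliberately weak illustration, not a real design):

```python
# Meet-in-the-middle key recovery on double encryption C = E_k2(E_k1(P)).
# E is a toy invertible cipher (8-bit keys, 16-bit blocks); the point is
# the attack structure: roughly 2*2^8 cipher calls instead of 2^16.
MASK = 0xFFFF
def rol(x): return ((x << 3) | (x >> 13)) & MASK
def ror(x): return ((x >> 3) | (x << 13)) & MASK

def E(k, x):
    for r in range(4):
        x = rol((x + k) & MASK)
        x ^= (k * 29 + r) & MASK
    return x

def D(k, x):                              # exact inverse of E
    for r in reversed(range(4)):
        x ^= (k * 29 + r) & MASK
        x = (ror(x) - k) & MASK
    return x

k1, k2 = 0x5A, 0xC3                       # "unknown" keys to recover
pairs = [(pt, E(k2, E(k1, pt))) for pt in (0x1234, 0xBEEF)]

table = {}                                # forward half: E_ka(P0) -> ka
for ka in range(256):
    table.setdefault(E(ka, pairs[0][0]), []).append(ka)

candidates = []
for kb in range(256):                     # backward half: D_kb(C0)
    for ka in table.get(D(kb, pairs[0][1]), []):
        if E(kb, E(ka, pairs[1][0])) == pairs[1][1]:   # second-pair filter
            candidates.append((ka, kb))
print((k1, k2) in candidates)  # True
```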
 

 

Remote C2 seminar of 29 January 2021

The C2 seminar resumes remotely at the start of 2021. Connection information will be updated on this site in due course. The programme is as follows:

  • 10h00 – Alain Couvreur, Inria Saclay and LIX, École Polytechnique: On the hardness of the code equivalence problem in the rank metric.
Abstract: In this talk, I will discuss the code equivalence problem in the rank metric. For F_{q^m}-linear codes, the most studied case of rank-metric codes, we prove that the problem can be solved in polynomial time, even in the worst case. The problem can also be stated for general matrix spaces; in that setting, I will present a reduction proving that the problem is at least as hard as code equivalence in the Hamming metric.
This is joint work with Thomas Debris-Alazard and Philippe Gaborit.
  • 11h00 – Olivier Bernard, Thales and Univ Rennes, CNRS, IRISA: Twisted-PHS: Using the Product Formula to Solve Approx-SVP in Ideal Lattices
Abstract: Approx-SVP is a well-known hard problem on lattices, which asks for short vectors in a given lattice, but its variant restricted to ideal lattices (which correspond to ideals of the ring of integers O_K of a number field K) is still not fully understood. For a long time, the best known algorithm for this problem on ideal lattices was the same as for arbitrary lattices. Recently, however, a series of works has suggested that the problem may be easier on ideal lattices than on arbitrary ones, in particular in the quantum setting. Our main contribution is a new "twisted" version of the PHS algorithm (by Pellet-Mary, Hanrot and Stehlé, 2019), which we call Twisted-PHS. As a minor contribution, we also propose several improvements to the PHS algorithm. On the theoretical side, we prove that our Twisted-PHS algorithm performs at least as well as the original PHS algorithm. On the practical side, we provide a full implementation of our algorithm, which suggests that much better approximation factors are achieved and that the given lattice bases are far more orthogonal than those used in PHS. To our knowledge, this is the first time this type of algorithm has been completely implemented and tested for fields of degree up to 60.

 

Video-conference link:

Join the meeting from your computer, tablet or smartphone.
https://global.gotomeeting.com/join/526194573

You can also dial in by phone.
France: +33 170 950 590

Access code: 526-194-573

Join the meeting from a video-conference room or system.
Dial or type: 67.217.95.2 or inroomlink.goto.com
Meeting ID: 526 194 573
Or dial directly: 526194573@67.217.95.2 or 67.217.95.2##526194573