
Showing papers in "Journal of Cryptology in 2021"


Journal ArticleDOI
TL;DR: This paper provides the specification of Ascon-128 and Ascon-128a, specifies the hash function Ascon-Hash and the extendable output function Ascon-Xof, and complements the specification with a detailed overview of existing cryptanalysis and implementation results.
Abstract: Authenticated encryption satisfies the basic need for authenticity and confidentiality in our information infrastructure. In this paper, we provide the specification of Ascon-128 and Ascon-128a. Both authenticated encryption algorithms provide efficient authenticated encryption on resource-constrained devices and on high-end CPUs. Furthermore, they have been selected as the “primary choice” for lightweight authenticated encryption in the final portfolio of the CAESAR competition. In addition, we specify the hash function Ascon-Hash, and the extendable output function Ascon-Xof. Moreover, we complement the specification by providing a detailed overview of existing cryptanalysis and implementation results.
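The specification defines a nonce-based AEAD interface (key, nonce, associated data, plaintext) plus a hash and an XOF. A minimal usage sketch in Python is shown below; the module name ascon and the exact function signatures are assumptions for illustration, not the official bindings.

    import os
    import ascon  # assumed binding around the reference implementation (hypothetical)

    key   = os.urandom(16)   # 128-bit key (Ascon-128 / Ascon-128a)
    nonce = os.urandom(16)   # 128-bit nonce, must never repeat under the same key
    ad    = b"header"        # associated data: authenticated but not encrypted
    msg   = b"secret payload"

    # Authenticated encryption; the ciphertext carries a 128-bit tag.
    ct = ascon.encrypt(key, nonce, ad, msg, variant="Ascon-128")
    assert ascon.decrypt(key, nonce, ad, ct, variant="Ascon-128") == msg

    # Hashing and extendable output, as specified alongside the AEAD modes.
    digest = ascon.hash(b"message", variant="Ascon-Hash")                  # 256-bit digest
    stream = ascon.hash(b"message", variant="Ascon-Xof", hashlength=64)    # arbitrary-length output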

68 citations


Journal ArticleDOI
TL;DR: In this paper, a unified view of the two-party and multi-party computation protocols based on oblivious transfer is presented, together with a number of modifications and improvements to these earlier presentations as well as full proofs of the entire protocol.
Abstract: We present a unified view of the two-party and multi-party computation protocols based on oblivious transfer first outlined in Nielsen et al. (CRYPTO 2012) and Larraia et al. (CRYPTO 2014). We present a number of modifications and improvements to these earlier presentations, as well as full proofs of the entire protocol. Improvements include a unified pre-processing and online MAC methodology, mechanisms to pass between different MAC variants and fixing a minor bug in the protocol of Larraia et al. in relation to a selective failure attack. It also fixes a minor bug in Nielsen et al. resulting from using Jensen’s inequality in the wrong direction in an analysis.

31 citations


Journal ArticleDOI
TL;DR: This work provides a new security proof for the cryptographic core of TLS 1.3 in the random oracle model, and shows that by replacing the RSA-PSS scheme with a tightly secure scheme, one can obtain the first fully tightly secure TLS protocol.
Abstract: We consider the theoretically sound selection of cryptographic parameters, such as the size of algebraic groups or RSA keys, for TLS 1.3 in practice. While prior works gave security proofs for TLS 1.3, their security loss is quadratic in the total number of sessions across all users, which due to the pervasive use of TLS is huge. Therefore, in order to deploy TLS 1.3 in a theoretically sound way, it would be necessary to compensate this loss with unreasonably large parameters that would be infeasible for practical use at large scale. Hence, while these previous works show that in principle the design of TLS 1.3 is secure in an asymptotic sense, they do not yet provide any useful concrete security guarantees for real-world parameters used in practice. In this work, we provide a new security proof for the cryptographic core of TLS 1.3 in the random oracle model, which reduces the security of TLS 1.3 tightly (that is, with constant security loss) to the (multi-user) security of its building blocks. For some building blocks, such as the symmetric record layer encryption scheme, we can then rely on prior work to establish tight security. For others, such as the RSA-PSS digital signature scheme currently used in TLS 1.3, we obtain at least a linear loss in the number of users, independent of the number of sessions, which is much easier to compensate with reasonable parameters. Our work also shows that by replacing the RSA-PSS scheme with a tightly secure scheme (e.g., in a future TLS version), one can obtain the first fully tightly secure TLS protocol. Our results enable a theoretically sound selection of parameters for TLS 1.3, even in large-scale settings with many users and sessions per user.
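To see why the loss matters, a back-of-the-envelope calculation (all numbers assumed for illustration, not taken from the paper) shows how many bits of security the building blocks must provide for a 128-bit target under different reduction losses:

    from math import log2

    target_bits = 128
    users       = 2**30          # assumed number of users
    sessions    = 2**35          # assumed total number of sessions across all users

    losses = {
        "quadratic in #sessions (prior proofs)": sessions**2,
        "linear in #users (signature scheme)":   users,
        "tight / constant":                      1,
    }

    # A loss factor L forces roughly log2(L) extra bits of security from the primitives.
    for name, loss in losses.items():
        print(f"{name:40s} -> need ~{target_bits + log2(loss):.0f}-bit building blocks")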

28 citations


Journal ArticleDOI
TL;DR: In this paper, the effect of FDI on financial development was investigated for the selected 102 Belt and Road Initiative countries on four continents: Asia, Europe, Africa, and Latin America.
Abstract: Foreign direct investment (FDI) is seen as a prerequisite for gaining and maintaining competitiveness. Simultaneously, the relationship between FDI and financial development (FD) has important implications for the researched economy and its competitiveness. This domain has not been sufficiently investigated, with diverse and contradictory findings evident in the literature. Therefore, this study investigates the effect of FDI on FD for the selected 102 Belt and Road Initiative countries on four continents: Asia, Europe, Africa, and Latin America. Based on data from 1990 to 2017, a set of quantitative techniques, including feasible generalized least squares and augmented mean group techniques, was used in this study. Our findings indicate that FDI, trade openness, government consumption, and inflation have a statistically significant relationship with FD. FDI, trade openness, and government consumption increased FD in Asia, Europe, and Latin America but decreased it in Africa. Inflation shows a negative influence on FD in all continents. Furthermore, the Dumitrescu–Hurlin panel causality test confirms a two-way causality relationship among FDI, trade openness, and FD in Asia and Europe. In contrast, a unidirectional relationship exists between FDI and FD in Latin America. The income-wise results reveal that low- and middle-income countries attract more FDI than high-income countries due to high factor costs. These empirical results provide new insights for policymakers, presenting several policy implications for FD competitiveness in the reference regions.

28 citations


Journal ArticleDOI
TL;DR: In this article, the authors identify the key sources of competitive advantage of large enterprises by means of exploratory factor analysis, the statistical method of reducing the number of classifying empirical variables, i.e. discovering a structure in their interrelations.
Abstract: The competitive advantage of enterprises in the conditions of market economy is not generated merely by ensuring high quality products and services. Therefore, in their strategies, they need to involve elements such as corporate social responsibility. The aim of the paper is to identify the key sources of competitive advantage of large enterprises. In the empirical research, the hypothesis has been tested to determine if the application of corporate social responsibility by enterprises has a statistically significant effect on gaining competitive advantage in the market. The hypothesis is verified on the basis of the authors’ study of 253 large enterprises operating in Poland by means of exploratory factor analysis, the statistical method of reducing the number of classifying empirical variables, i.e. of discovering a structure in their interrelations. The procedure enabled the selection of the factors with the greatest statistical shares in explaining variability. To this end, the input space was rotated in accordance with the Varimax criterion, with the number of determined factors specified by means of the Kaiser criterion and Cattell’s scree test. The application of an exploratory factor analysis enabled the authors to construct an original factor model of sources of enterprise competitive advantage, with three factors identified: marketing, innovation activity and corporate social responsibility. This indicates that marketing activities, innovation activities and the application of corporate social responsibility are the key sources of competitive advantage in large enterprises operating in the market.

23 citations


Journal ArticleDOI
TL;DR: This analysis in the reductionist security framework uses a multi-stage key exchange security model, where each of the many session keys derived in a single TLS 1.3 handshake is tagged with various properties, and shows that the handshake modes establish session keys with their desired security properties under standard cryptographic assumptions.
Abstract: We analyze the handshake protocol of the Transport Layer Security (TLS) protocol, version 1.3. We address both the full TLS 1.3 handshake (the one round-trip time mode, with signatures for authentication and (elliptic curve) Diffie–Hellman ephemeral ((EC)DHE) key exchange), and the abbreviated resumption/“PSK” mode which uses a pre-shared key for authentication (with optional (EC)DHE key exchange and zero round-trip time key establishment). Our analysis in the reductionist security framework uses a multi-stage key exchange security model, where each of the many session keys derived in a single TLS 1.3 handshake is tagged with various properties (such as unauthenticated versus unilaterally authenticated versus mutually authenticated, whether it is intended to provide forward security, how it is used in the protocol, and whether the key is protected against replay attacks). We show that these TLS 1.3 handshake protocol modes establish session keys with their desired security properties under standard cryptographic assumptions.
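The "many session keys" in question come from TLS 1.3's HKDF-based key schedule. The sketch below shows that skeleton in Python; the label handling is a deliberate simplification of RFC 8446's HKDF-Expand-Label (so it is not interoperable), and the PSK/(EC)DHE inputs are placeholders.

    import hashlib, hmac

    HASH = hashlib.sha256
    ZERO = b"\x00" * HASH().digest_size

    def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
        return hmac.new(salt, ikm, HASH).digest()

    def derive(secret: bytes, label: bytes, context: bytes = b"") -> bytes:
        # Simplified stand-in for Derive-Secret / HKDF-Expand-Label.
        return hmac.new(secret, b"tls13 " + label + context, HASH).digest()

    psk        = ZERO                                   # full handshake: no PSK
    ecdhe      = b"(EC)DHE shared secret placeholder"
    transcript = HASH(b"handshake messages so far").digest()

    early_secret     = hkdf_extract(ZERO, psk)
    handshake_secret = hkdf_extract(derive(early_secret, b"derived"), ecdhe)
    master_secret    = hkdf_extract(derive(handshake_secret, b"derived"), ZERO)

    # A few of the stage keys that the multi-stage model tags with distinct properties:
    client_hs_traffic  = derive(handshake_secret, b"c hs traffic", transcript)
    server_hs_traffic  = derive(handshake_secret, b"s hs traffic", transcript)
    client_app_traffic = derive(master_secret,    b"c ap traffic", transcript)
    resumption_master  = derive(master_secret,    b"res master",   transcript)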

23 citations


Journal ArticleDOI
TL;DR: Zhandry (CRYPTO 2012) proved the security of GPV-IBE in the quantum random oracle model (QROM), showing that GPV-IBE is indeed post-quantum; however, the large reduction loss of that proof leaves a wide gap between the concrete security of GPV-IBE in the ROM and in the QROM, which this paper revisits.
Abstract: In (STOC, 2008), Gentry, Peikert, and Vaikuntanathan proposed the first identity-based encryption (GPV-IBE) scheme based on a post-quantum assumption, namely, the learning with errors (LWE) assumption. Since their proof was only made in the random oracle model (ROM) instead of the quantum random oracle model (QROM), it remained unclear whether the scheme was truly post-quantum or not. In (CRYPTO, 2012), Zhandry developed new techniques to be used in the QROM and proved security of GPV-IBE in the QROM, hence answering in the affirmative that GPV-IBE is indeed post-quantum. However, since the general technique developed by Zhandry incurred a large reduction loss, there was a wide gap between the concrete efficiency and security level provided by GPV-IBE in the ROM and QROM. Furthermore, regardless of being in the ROM or QROM, GPV-IBE is not known to have a tight reduction in the multi-challenge setting. Considering that in the real world an adversary can obtain many ciphertexts, it is desirable to have a security proof that does not degrade with the number of challenge ciphertexts.

20 citations


Journal ArticleDOI
TL;DR: A searchable symmetric encryption scheme enables a client to store data on an untrusted server while supporting keyword searches in a secure manner; the best-possible tradeoff between space overhead, locality, and read efficiency has not been identified, and there are substantial gaps between the existing schemes and lower bounds.
Abstract: A searchable symmetric encryption (SSE) scheme enables a client to store data on an untrusted server while supporting keyword searches in a secure manner. Recent experiments have indicated that the practical relevance of such schemes heavily relies on the tradeoff between their space overhead, locality (the number of non-contiguous memory locations that the server accesses with each query), and read efficiency (the ratio between the number of bits the server reads with each query and the actual size of the answer). These experiments motivated Cash and Tessaro (EUROCRYPT ’14) and Asharov et al. (STOC ’16) to construct SSE schemes offering various such tradeoffs, and to prove lower bounds for natural SSE frameworks. Unfortunately, the best-possible tradeoff has not been identified, and there are substantial gaps between the existing schemes and lower bounds, indicating that a better understanding of SSE is needed.
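The locality and read-efficiency metrics are easy to make concrete. The toy computation below uses an assumed word-addressed layout and query (all numbers illustrative) and simply applies the definitions quoted above.

    def locality(read_positions):
        # Number of maximal runs of contiguous addresses = non-contiguous regions read.
        runs, prev = 0, None
        for p in sorted(read_positions):
            if prev is None or p != prev + 1:
                runs += 1
            prev = p
        return runs

    WORD_BITS = 64
    reads = [100, 101, 102, 500, 501, 900]   # word addresses touched by one query (assumed)
    answer_bits = 3 * WORD_BITS              # the true result list occupies 3 words (assumed)

    print("locality        :", locality(reads))                          # 3 regions
    print("read efficiency :", len(reads) * WORD_BITS / answer_bits)     # 2.0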

16 citations


Journal ArticleDOI
TL;DR: In this article, the authors extended the Cobb-Douglas function by including other competitiveness factors in a panel data framework based on the EU-28 countries in the period 2004-2018, and they found that GDP per capita variation is explained by human and physical capital, FDI, and R&D expenditure.
Abstract: During the past few decades, globalization has dramatically changed the context of competitiveness around the world. Considering the role of competitiveness in the development of the digital economy, this paper aims to highlight the role of innovation, foreign direct investment (FDI), and human capital in supporting competitive European economies. The research hypothesis is that FDI, innovation, and human capital contribute to competitiveness growth. The paper extends the Cobb-Douglas function by including other competitiveness factors in a panel data framework based on the EU-28 countries in the period 2004-2018. The results indicate that GDP per capita variation is explained by human and physical capital, FDI, and R&D expenditure. Human capital plays a crucial role in economic development due to the innovation skills of individuals, which improve the productivity of these factors. Capital formation also makes a positive contribution to economic growth. The empirical evidence suggests that the changes in the GDP per capita are explained by modifications in the labor force and capital formation, as is described in the traditional framework of the Cobb-Douglas function. R&D expenditure and FDI stock, however, also play a significant role. Moreover, human capital could determine the adoption of external technology by absorbing new equipment and ideas. On the other hand, the education index and capital formation showed a positive impact on GCI.
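A generic panel form of the augmented production function the paper alludes to could be written as follows; the exact regressors and functional form used by the authors are not reproduced here, so the terms for FDI, R&D and human capital are assumptions for illustration:

$$Y_{it} = A\,K_{it}^{\alpha}\,L_{it}^{\beta}\,e^{\gamma_1 \mathrm{FDI}_{it} + \gamma_2 \mathrm{RD}_{it} + \gamma_3 H_{it} + \varepsilon_{it}} \quad\Longrightarrow\quad \ln Y_{it} = \ln A + \alpha \ln K_{it} + \beta \ln L_{it} + \gamma_1 \mathrm{FDI}_{it} + \gamma_2 \mathrm{RD}_{it} + \gamma_3 H_{it} + \varepsilon_{it}$$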

14 citations


Journal ArticleDOI
Shai Halevi, Victor Shoup
TL;DR: Gentry's bootstrapping technique is still the only known method of obtaining fully homomorphic encryption where the system's parameters do not depend on the complexity of the evaluated functions; this paper shows how recryption can handle "packed ciphertexts" that encrypt vectors of elements.
Abstract: Gentry’s bootstrapping technique is still the only known method of obtaining fully homomorphic encryption where the system’s parameters do not depend on the complexity of the evaluated functions. Bootstrapping involves a recryption procedure where the scheme’s decryption algorithm is evaluated homomorphically. So far, there have been precious few implementations of recryption, and fewer still that can handle “packed ciphertexts” that encrypt vectors of elements.

13 citations



Journal ArticleDOI
TL;DR: In this article, it is shown that Taniguchi's construction yields at least $$\frac{\varphi (m)}{2}\left\lceil \frac{2^m+1}{3m} \right\rceil $$ inequivalent APN functions on the finite field with $${2^{2m}}$$ elements. This is a great improvement of previous results: for even m, the best known lower bound has been $$\frac{\varphi (m)}{2}\left( \lfloor \frac{m}{4}\rfloor +1\right) $$ ; for odd m, there has been no such lower bound at all.
Abstract: Almost perfect nonlinear (APN) functions play an important role in the design of block ciphers as they offer the strongest resistance against differential cryptanalysis. Despite more than 25 years of research, only a limited number of APN functions are known. In this paper, we show that a recent construction by Taniguchi provides at least $$\frac{\varphi (m)}{2}\left\lceil \frac{2^m+1}{3m} \right\rceil $$ inequivalent APN functions on the finite field with $${2^{2m}}$$ elements, where $$\varphi $$ denotes Euler’s totient function. This is a great improvement of previous results: for even m, the best known lower bound has been $$\frac{\varphi (m)}{2}\left( \lfloor \frac{m}{4}\rfloor +1\right) $$ ; for odd m, there has been no such lower bound at all. Moreover, we determine the automorphism group of Taniguchi’s APN functions.
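The bound is easy to evaluate numerically. The snippet below simply plugs small values of m into the two formulas quoted in the abstract (it does not construct any APN functions):

    from math import gcd

    def euler_phi(n: int) -> int:
        # Naive Euler totient, fine for small m.
        return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

    for m in range(2, 11):
        new_bound = euler_phi(m) * (-(-(2**m + 1) // (3 * m))) / 2   # phi(m)/2 * ceil((2^m+1)/(3m))
        line = f"m={m:2d}: Taniguchi functions over GF(2^{2*m}) >= {new_bound:g}"
        if m % 2 == 0:
            old_bound = euler_phi(m) * (m // 4 + 1) / 2              # previous bound, even m only
            line += f"   (previous bound: {old_bound:g})"
        print(line)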

Journal ArticleDOI
TL;DR: The first reusable fuzzy extractor that makes no assumptions about how multiple readings of the source are correlated is constructed, and computationally secure as well as information-theoretically secure constructions for large-alphabet sources with high error rates are also given.
Abstract: Fuzzy extractors (Dodis et al., in Advances in cryptology—EUROCRYPT 2014, Springer, Berlin, 2014, pp 93–110) convert repeated noisy readings of a secret into the same uniformly distributed key. To eliminate noise, they require an initial enrollment phase that takes the first noisy reading of the secret and produces a nonsecret helper string to be used in subsequent readings. Reusable fuzzy extractors (Boyen, in Proceedings of the 11th ACM conference on computer and communications security, CCS, ACM, New York, 2004, pp 82–91) remain secure even when this initial enrollment phase is repeated multiple times with noisy versions of the same secret, producing multiple helper strings (for example, when a single person’s biometric is enrolled with multiple unrelated organizations). We construct the first reusable fuzzy extractor that makes no assumptions about how multiple readings of the source are correlated. The extractor works for binary strings with Hamming noise; it achieves computational security under the existence of digital lockers (Canetti and Dakdouk, in Advances in cryptology—EUROCRYPT 2008, Springer, Berlin, 2008, pp 489–508). It is simple and tolerates near-linear error rates. Our reusable extractor is secure for source distributions of linear min-entropy rate. The construction is also secure for sources with much lower entropy rates—lower than those supported by prior (nonreusable) constructions—assuming that the distribution has some additional structure, namely, that random subsequences of the source have sufficient min-entropy. Structure beyond entropy is necessary to support distributions with low entropy rates. We then explore further how different structural properties of a noisy source can be used to construct fuzzy extractors when the error rates are high, building a computationally secure and an information-theoretically secure construction for large-alphabet sources.
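A toy sketch of the "sample-then-lock" idea behind the reusable construction is shown below: random subsequences of the reading are used to lock a fresh key, and reproduction succeeds if any sampled subsequence is error-free. Digital lockers are modeled here by plain hashing over bytes rather than bits, so this only illustrates the structure; it is not a secure instantiation of the paper's scheme.

    import hashlib, os, secrets

    def H(*parts: bytes) -> bytes:
        h = hashlib.sha256()
        for p in parts:
            h.update(p)
        return h.digest()

    def gen(w: bytes, lockers: int = 50, subset: int = 8):
        key, helper = os.urandom(16), []
        for _ in range(lockers):
            idx = [secrets.randbelow(len(w)) for _ in range(subset)]   # random subsequence positions
            sub = bytes(w[i] for i in idx)
            nonce = os.urandom(16)
            pad   = H(b"lock", nonce, sub)[:16]                        # toy "digital locker"
            check = H(b"chk",  nonce, sub)[:8]
            helper.append((idx, nonce, bytes(a ^ b for a, b in zip(key, pad)), check))
        return key, helper                                             # helper is the nonsecret string

    def rep(w2: bytes, helper):
        for idx, nonce, ct, check in helper:
            sub = bytes(w2[i] for i in idx)
            if H(b"chk", nonce, sub)[:8] == check:                     # this locker opens: subsequence is error-free
                pad = H(b"lock", nonce, sub)[:16]
                return bytes(a ^ b for a, b in zip(ct, pad))
        return None

    w = os.urandom(32)                       # toy "reading" of the source
    key, helper = gen(w)
    noisy = bytearray(w); noisy[3] ^= 0xFF   # one corrupted position
    assert rep(bytes(noisy), helper) == key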

Journal ArticleDOI
TL;DR: In this paper, the authors identify the specifics of supplier-customer relationships in engineering which respond to the current trends and find out how Czech engineering companies have implemented specific elements of Industry 4.0.
Abstract: The article deals with the implementation of Industry 4.0 elements in Czech engineering companies in connection with the impact of this trend on the relationship between supplier and customer. The implementation of Industry 4.0 elements can have a positive effect on the relationship between supplier and customer through higher labour productivity, higher product quality as well as shorter production or delivery times. Industry 4.0 brings great opportunities for companies, which can mean greater efficiency and competitiveness; on the other hand, there are questions about whether companies are ready for it, i.e. whether there is sufficient infrastructure necessary to put Industry 4.0 into practice. The aim of this article is to identify the specifics of supplier-customer relationships in engineering which respond to the current trends and to find out how Czech engineering companies have implemented specific elements of Industry 4.0. No study of this kind has ever been conducted in the environment of Czech engineering companies. Based on the analysis of primary data obtained from 236 Czech engineering companies, the current trends in the management of relations between suppliers and customers are described; Czech engineering companies can use our results to increase their competitiveness. Emphasis is placed on Industry 4.0, planned investments in this infrastructure and the implementation of individual elements. Of the elements of Industry 4.0, Czech engineering companies mostly use tools and methods ensuring data security, automation of technological equipment and processes, cloud computing, mass customization and introducing sensors into production. Our results show that the investment in the necessary infrastructure is mainly related to the size of the company, with almost half of the large companies surveyed planning to invest in the infrastructure necessary to implement Industry 4.0 elements, while 46% of micro-enterprises do not plan to invest in Industry 4.0 elements.

Journal ArticleDOI
TL;DR: In this article, the authors identify a vulnerability in the TLS 1.3 option by showing a new reflection attack that they call "Selfie", which leverages the fact that TLS does not mandate explicit authentication of the server and the client, and leverages it to break the protocol's mutual authentication property.
Abstract: TLS 1.3 allows two parties to establish a shared session key from an out-of-band agreed pre-shared key (PSK). The PSK is used to mutually authenticate the parties, under the assumption that it is not shared with others. This allows the parties to skip the certificate verification steps, saving bandwidth, communication rounds, and latency. In this paper, we identify a vulnerability in this specific TLS 1.3 option by showing a new “reflection attack” that we call “Selfie.” This attack uses the fact that TLS does not mandate explicit authentication of the server and the client, and leverages it to break the protocol’s mutual authentication property. We explain the root cause of this TLS 1.3 vulnerability, provide a fully detailed demonstration of a Selfie attack using the TLS implementation of OpenSSL, and propose mitigation. The Selfie attack is the first attack on TLS 1.3 after its official release in 2018. It is surprising because it uncovers an interesting gap in the existing TLS 1.3 models that the security proofs rely on. We explain the gap in these model assumptions and show how it affects the proofs in this case.
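The root cause can be illustrated without any real TLS code: when the same external PSK is installed at a party's client and server endpoints, a PSK-only check proves knowledge of the key but not which peer (or role) produced the message. The sketch below is an abstract model with made-up message objects, not an attack on an actual TLS stack.

    import hashlib

    def psk_binder(psk: bytes, payload: bytes) -> bytes:
        # Abstract stand-in for "proof of PSK knowledge" attached to a hello message.
        return hashlib.sha256(psk + payload).digest()

    class Endpoint:
        def __init__(self, name: str, psk: bytes):
            self.name, self.psk = name, psk

        def client_hello(self) -> dict:
            payload = b"ClientHello"
            return {"payload": payload, "binder": psk_binder(self.psk, payload)}

        def server_accepts(self, hello: dict) -> bool:
            # Authentication checks only that the binder matches the shared PSK.
            return hello["binder"] == psk_binder(self.psk, hello["payload"])

    psk = b"out-of-band agreed PSK"
    alice_as_client = Endpoint("Alice (client role)", psk)
    alice_as_server = Endpoint("Alice (server role)", psk)

    hello = alice_as_client.client_hello()          # intended for Bob
    # Eve never learns the PSK; she simply reflects Alice's message back to Alice.
    print("Alice's own server accepts it:", alice_as_server.server_accepts(hello))  # True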

Journal ArticleDOI
TL;DR: A framework for the implementation of BPM in Slovak SMEs is introduced, based on research on transportation SMEs, the findings of previous research studies, the results of the authors' own questionnaire surveys, and personal meetings/interviews with owners/managers of transportation SMEs.
Abstract: Business performance management (BPM) is an instrument that allows the fulfillment of business objectives and the improvement of competitiveness in small and medium-sized enterprises (SMEs). When BPM is implemented and measured, it can improve the sustainability and competitiveness of the enterprise. Despite its potential benefits, the possibilities of BPM in SMEs are often underestimated due to the lack of resources (mainly personal or financial). The goal of this paper is to introduce a framework for the implementation of BPM in Slovak SMEs based on research on transportation SMEs. To this end, certain steps that support the efficient introduction and use of BPM in these SMEs will be proposed. Our proposal regarding the performance of BPM is based on the findings of previous research studies along with the results of our own questionnaire surveys and personal meetings/interviews with owners/managers of transportation SMEs. The results of this research show that SMEs are generally not familiar with BPM and how the system is used. Essential elements to implement BPM are lacking in SMEs, and although the BPM system might help these firms improve their competitiveness, SMEs remain uninformed about the advantages of BPM. The proposed framework of BPM implementation in this paper can be used to inform SMEs and to assist them in their decision-making processes regarding the application of this system.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the moderating role of this dark side in the relationship between network embeddedness and the innovation performance of SMEs, and the role of relationship ending capability in neutralizing the negative effect of the dark side.
Abstract: A key driver of firm competitive advantage is the firm’s ability to develop along with the ever-changing business environment and associated market demands by being innovative. Small and medium-sized enterprises (SMEs), however, often lack adequate resources to develop innovation, so they search for external resources to augment the deficiency of their internal resources. Network embeddedness has many advantages for the firm, but it also has a dark side which has a negative effect on the network relationship. In order to take advantage of a network, firms should cultivate the capability to deal with the dark side of inter-firm relationships. Firstly, this study assesses the effect of network embeddedness on the innovation performance of SMEs. Secondly, the authors investigated the moderating role of this dark side in the relationship between network embeddedness and the innovation performance of SMEs. Finally, the role of relationship ending capability in neutralizing the negative effect of the dark side is presented. Empirical analysis was based on 388 SMEs. Various validity and reliability checks were conducted before the presentation of the analysis itself, which was conducted using the ordinary least squares approach in SPSS (v.23). The findings showed that the dark side negatively moderated the relationship between network embeddedness and the innovation performance of SMEs. This negative effect is, however, reduced by SMEs with a high relationship ending capability, which frees up the firm’s limited resources for more fruitful business relationships.


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the relationship between marketing communication tools and consumer perceived value in pursuit of consumer loyalty and found that the greatest and strongest relationship in consumer value creation is the appropriate, mutually coordinated and complementary use of a package of marketing communication communication tools to achieve synergies that create the preconditions for increasing consumer loyalty in a competitive market.
Abstract: The situation in the markets is changing rapidly and competition in the business sector is increasing rapidly. As a result, corporate marketing decisions are based on creating greater value for the consumer, which creates competitiveness and provides an advantage in competing for future customer loyalty. The purpose of this study is to determine whether there is a link between marketing communication tools and consumer perceived value in pursuit of consumer loyalty. Qualitative (observational research) and quantitative (a questionnaire survey) research methods were used to investigate the problem empirically. The observational research elucidated the value provided to the consumer by the research objects through marketing communication tools, supplementing the key questions for the quantitative study. Correlation and regression analysis were used in the study, with the results showing a statistically significant relationship between marketing communication tools and consumer perceived value in terms of user loyalty. It has also been determined that the greatest and strongest relationship in consumer value creation through marketing communication tools is the appropriate, mutually coordinated and complementary use of a package of marketing communication tools to achieve synergies that create the preconditions for increasing consumer loyalty in a competitive market.

Journal ArticleDOI
TL;DR: Bloom filter encryption (BFE), as introduced by the authors, is a new primitive derived from the probabilistic Bloom filter data structure that yields puncturable encryption with extremely efficient puncturing, giving rise to forward-secret 0-RTT protocols that are efficient enough to be deployed in practice.
Abstract: Forward secrecy is considered an essential design goal of modern key establishment (KE) protocols, such as TLS 1.3, for example. Furthermore, efficiency considerations such as zero round-trip time (0-RTT), where a client is able to send cryptographically protected payload data along with the very first KE message, are motivated by the practical demand for secure low-latency communication. For a long time, it was unclear whether protocols that simultaneously achieve 0-RTT and full forward secrecy exist. Only recently, the first forward-secret 0-RTT protocol was described by Gunther et al. (Eurocrypt, 2017). It is based on puncturable encryption. Forward secrecy is achieved by “puncturing” the secret key after each decryption operation, such that a given ciphertext can only be decrypted once (cf. also Green and Miers, S&P 2015). Unfortunately, their scheme is completely impractical, since one puncturing operation takes between 30 s and several minutes for reasonable security and deployment parameters, such that this solution is only a first feasibility result, but not efficient enough to be deployed in practice. In this paper, we introduce a new primitive that we term Bloom filter encryption (BFE), which is derived from the probabilistic Bloom filter data structure. We describe different constructions of BFE schemes and show how these yield new puncturable encryption mechanisms with extremely efficient puncturing. Most importantly, a puncturing operation only involves a small number of very efficient computations, plus the deletion of certain parts of the secret key, which outperforms previous constructions by orders of magnitude. This gives rise to the first forward-secret 0-RTT protocols that are efficient enough to be deployed in practice. We believe that BFE will find applications beyond forward-secret 0-RTT protocols.
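The efficiency claim hinges on Bloom-filter bookkeeping: a ciphertext is associated with a few filter positions, each position has its own secret-key component, and puncturing just deletes those components. The toy sketch below models only that bookkeeping (the per-position subkeys are placeholder strings, not the identity-based keys of the actual BFE constructions).

    import hashlib

    M, K = 64, 3                                      # filter size and number of hash functions (assumed)

    def positions(tag: bytes):
        # The K Bloom-filter positions associated with a ciphertext tag.
        return {int.from_bytes(hashlib.sha256(bytes([i]) + tag).digest(), "big") % M
                for i in range(K)}

    secret_key = {i: f"subkey_{i}" for i in range(M)} # one key component per filter position

    def puncture(tag: bytes) -> None:
        for p in positions(tag):                      # a few hashes plus deletions: very cheap
            secret_key.pop(p, None)

    def can_decrypt(tag: bytes) -> bool:
        # Decryption needs at least one surviving component among the tag's positions.
        return any(p in secret_key for p in positions(tag))

    tag = b"ciphertext-1"
    print(can_decrypt(tag))   # True: not punctured yet
    puncture(tag)
    print(can_decrypt(tag))   # False: this ciphertext is now undecryptable (forward secrecy)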

Journal ArticleDOI
TL;DR: In this article, the authors consider NIZK arguments whose common reference string (CRS) may be subverted and make Groth's zk-SNARK for Circuit-SAT computationally knowledge-sound and perfectly composable subversion-zero knowledge (Sub-ZK) with minimal changes, requiring only an extractable CRS trapdoor and a publicly verifiable CRS.
Abstract: While NIZK arguments in the CRS model are widely studied, the question of what happens when the CRS is subverted has received little attention. In ASIACRYPT 2016, Bellare, Fuchsbauer, and Scafuro showed the first negative and positive results, proving also that it is impossible to achieve subversion soundness and (even non-subversion) zero knowledge at the same time. On the positive side, they constructed a sound and subversion-zero knowledge (Sub-ZK) non-succinct NIZK argument for NP. We consider the practically very relevant case of zk-SNARKs. We make Groth’s zk-SNARK for Circuit-SAT from EUROCRYPT 2016 computationally knowledge-sound and perfectly composable Sub-ZK with minimal changes. We only require the CRS trapdoor to be extractable and the CRS to be publicly verifiable. To achieve the latter, we add some new elements to the CRS and construct an efficient CRS verification algorithm. We also provide a definitional framework for knowledge-sound and Sub-ZK SNARKs.

Journal ArticleDOI
TL;DR: In this paper, constructing NIZK proof systems for all of NP based on LWE is reduced to constructing a NIZK proof system for a decisional variant of the bounded distance decoding (BDD) problem, using a new notion of prover-assisted oblivious ciphertext sampling (POCS) that gives the oblivious ciphertext sampler access to an untrusted prover.
Abstract: Non-interactive zero-knowledge ( $$\mathsf {NIZK}$$ ) is a fundamental primitive that is widely used in the construction of cryptographic schemes and protocols. Our main result is a reduction from constructing $$\mathsf {NIZK}$$ proof systems for all of $$\mathbf {NP}$$ based on $$\mathsf {LWE}$$ , to constructing a $$\mathsf {NIZK}$$ proof system for a particular computational problem on lattices, namely a decisional variant of the bounded distance decoding ( $$\mathsf {BDD}$$ ) problem. That is, we show that assuming $$\mathsf {LWE}$$ , every language $$L \in \mathbf {NP}$$ has a $$\mathsf {NIZK}$$ proof system if (and only if) the decisional $$\mathsf {BDD}$$ problem has a $$\mathsf {NIZK}$$ proof system. This (almost) confirms a conjecture of Peikert and Vaikuntanathan (CRYPTO, 2008). To construct our $$\mathsf {NIZK}$$ proof system, we introduce a new notion that we call prover-assisted oblivious ciphertext sampling ( $$\mathsf {POCS}$$ ), which we believe to be of independent interest. This notion extends the idea of oblivious ciphertext sampling, which allows one to sample ciphertexts without knowing the underlying plaintext. Specifically, we augment the oblivious ciphertext sampler with access to an (untrusted) prover to help it accomplish this task. We show that the existence of encryption schemes with a $$\mathsf {POCS}$$ procedure, as well as some additional natural requirements, suffices for obtaining $$\mathsf {NIZK}$$ proofs for $$\mathbf {NP}$$ . We further show that such encryption schemes can be instantiated based on $$\mathsf {LWE}$$ , assuming the existence of a $$\mathsf {NIZK}$$ proof system for the decisional $$\mathsf {BDD}$$ problem.

Journal ArticleDOI
TL;DR: In this article, the first constructions of public key quantum money from several cryptographic assumptions were presented, and several new techniques including a new precise variant of the no-cloning theorem were developed.
Abstract: Public key quantum money can be seen as a version of the quantum no-cloning theorem that holds even when the quantum states can be verified by the adversary. In this work, we investigate quantum lightning, a formalization of “collision-free quantum money” defined by Lutomirski et al. [ICS’10], where no-cloning holds even when the adversary herself generates the quantum state to be cloned. We then study quantum money and quantum lightning, showing several results; in particular, we provide the first constructions of public key quantum money from several cryptographic assumptions. Along the way, we develop several new techniques, including a new precise variant of the no-cloning theorem.

Journal ArticleDOI
TL;DR: Deoxys, as presented in this paper, uses a new family of tweakable block ciphers, Deoxys-TBC, as internal primitive; Deoxys-TBC follows the TWEAKEY framework and relies on the AES round function.
Abstract: We present the Deoxys family of authenticated encryption schemes, which consists of Deoxys-I and Deoxys-II. Both are nonce-based authenticated encryption schemes with associated data and have either 128- or 256-bit keys. Deoxys-I is similar to OCB: It is single-pass but insecure when nonces are repeated; in contrast, Deoxys-II is nonce-misuse resistant. Deoxys-II was selected as first choice in the final portfolio of the CAESAR competition for the defense-in-depth category. Deoxys uses a new family of tweakable block ciphers as internal primitive, Deoxys-TBC, which follows the TWEAKEY framework (Jean, Nikolic, and Peyrin, ASIACRYPT 2014) and relies on the AES round function. Our benchmarks indicate that Deoxys does not sacrifice efficiency for security and performs very well both in software (e.g., Deoxys-I efficiency is similar to AES-GCM) and hardware.


Journal ArticleDOI
TL;DR: In this paper, the authors consider the question of whether PPAD hardness can be based on standard cryptographic assumptions, such as the existence of one-way functions or public-key encryption.
Abstract: We consider the question of whether PPAD hardness can be based on standard cryptographic assumptions, such as the existence of one-way functions or public-key encryption. This question is particularly well-motivated in light of new devastating attacks on obfuscation candidates and their underlying building blocks, which are currently the only known source for PPAD hardness.

Journal ArticleDOI
TL;DR: In this article, the authors revisited the results by Degwekar, Vaikuntanathan, and Vasudevan in Crypto 2016 on fine-grained cryptography and showed constructions of three key fundamental fine-grained cryptographic primitives: one-way permutation families, hash proof systems (which in turn imply a public-key encryption scheme secure against chosen ciphertext attacks), and trapdoor one-way functions.
Abstract: Fine-grained cryptographic primitives are secure against adversaries with bounded resources and can be computed by honest users with fewer resources than the adversaries. In this paper, we revisit the results by Degwekar, Vaikuntanathan, and Vasudevan in Crypto 2016 on fine-grained cryptography and show constructions of three key fundamental fine-grained cryptographic primitives: one-way permutation families, hash proof systems (which in turn imply a public-key encryption scheme secure against chosen ciphertext attacks), and trapdoor one-way functions. All of our constructions are computable in $$\textsf {NC}^1$$ and secure against (non-uniform) $$\textsf {NC}^1$$ circuits under the widely believed worst-case assumption $$\textsf {NC}^1\subsetneq {\oplus \textsf {L/poly}}$$ .

Journal ArticleDOI
TL;DR: In this article, the authors proposed a framework to evaluate the impact of data prefetchers in time-driven cache attacks using a metric based on the Kullback-Leibler transformation.
Abstract: Formally bounding side-channel leakage is important to bridge the gap between theory and practice in cryptography. However, bounding side-channel leakages is difficult because leakage in a cryptosystem could be from several sources. Moreover, the amount of leakage from a source may vary depending on the implementation of the cipher and the form of attack. To formally analyze the security of a cryptosystem, it is therefore essential to consider each source of leakage independently. This paper considers data prefetching, which is used in most modern day cache memories to reduce miss penalty. We build a framework that would help computer architects theoretically gauge the impact of a data prefetcher in time-driven cache attacks early in the design phase. The framework computes leakage due to the prefetcher using a metric that is based on the Kullback–Leibler transformation. We use the framework to analyze two commonly used prefetching algorithms, namely sequential and arbitrary-stride prefetching. These form the basis of several other prefetching algorithms. We also demonstrate its use by designing a new prefetching algorithm called even–odd prefetcher that does not have leakage in time-driven cache attacks.
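As a toy illustration of a Kullback-Leibler-style leakage score, the snippet below compares two synthetic timing histograms (both distributions are assumptions, not outputs of the paper's framework); a larger divergence means the prefetcher makes the two key-dependent access patterns easier to distinguish from timing alone.

    from math import log2

    def kl_divergence(p, q):
        # D(P || Q) in bits; assumes q[i] > 0 wherever p[i] > 0.
        return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    # Probability of observing a given number of cache misses per encryption,
    # under two key-dependent access patterns (synthetic numbers).
    pattern_a = [0.10, 0.20, 0.40, 0.20, 0.10]
    pattern_b = [0.05, 0.15, 0.30, 0.30, 0.20]

    print("leakage score (bits):", round(kl_divergence(pattern_a, pattern_b), 4))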

Journal ArticleDOI
TL;DR: In this article, the authors studied the security of both the permutation and the constructions that are based on it, providing a distinguisher on the full permutation of complexity $$2^{64}$$ as well as a linear distinguisher on the full permutation.
Abstract: $$\mathsf {Gimli}$$ is a family of cryptographic primitives (both a hash function and an AEAD scheme) that has been selected for the second round of the NIST competition for standardizing new lightweight designs. The candidate $$\mathsf {Gimli}$$ is based on the permutation $$\mathsf {Gimli}$$ , which was presented at CHES 2017. In this paper, we study the security of both the permutation and the constructions that are based on it. We exploit the slow diffusion in $$\mathsf {Gimli}$$ and its internal symmetries to build, for the first time, a distinguisher on the full permutation of complexity $$2^{64}$$ . We also provide a practical distinguisher on 23 out of the full 24 rounds of $$\mathsf {Gimli}$$ that has been implemented. Next, we give (full state) collision and semi-free start collision attacks on $$\mathsf {Gimli}$$ -Hash, reaching, respectively, up to 12 and 18 rounds. On the practical side, we compute a collision on 8-round $$\mathsf {Gimli}$$ -Hash. In the quantum setting, these attacks reach 2 more rounds. Finally, we perform the first study of linear trails in $$\mathsf {Gimli}$$ , and we find a linear distinguisher on the full permutation.

Journal ArticleDOI
TL;DR: In this article, the authors consider the impact of ownership structure as defined by shares of individuals’ versus institutional ownership and find that new companies with majority individual owners and minority institutional owners outperform others, although these differences disappear as firms age.
Abstract: Total factor productivity represents a dimension of output that cannot be attributed to factors of production; it is unique to the firm, and central to its competitiveness. We posit that ownership structure plays a key role in productivity. However, ownership structure has an effect on productivity which changes with a firm’s age: ownership structure that is optimal for new firms may not be optimal for older firms. We here consider the impact of ownership structure as defined by shares of individuals’ versus (broadly defined) institutional ownership. Our empirical counterpart draws on UK company data for 2008-2017, obtained from the Orbis database. Our key explanatory variables are the joint share of the individual owners in equity and its square, but we also control for concentration-of-ownership indices, along with a range of other firm-level characteristics. Applying fixed effects models and instrumenting ownership with regional-level variables, we found that new companies with majority individual owners and minority institutional owners outperform others. As firms age, however, these differences begin to disappear, with individual owners losing their control-related advantage. The relative advantage of individual owners in the early stage is consistent with property rights theory, which emphasises that residual control rights should remain with those whose investment is critical. It can be argued, however, that for whom the investment is critical changes as the firm ages. Our managerial implications emphasise ownership competence in optimising ownership structure, which should evolve along the stages of the life-cycle of the firm.