
Showing papers in "Journal of Cryptology in 2014"


Journal ArticleDOI
TL;DR: The effect of RC4 keylength on its keystream is investigated, and significant biases involving the length of the secret key are reported; moreover, the existence of positive biases towards zero for all the initial bytes 3 to 255 is proved and exploited in a generalized broadcast attack on RC4.
Abstract: RC4 has been the most popular stream cipher in the history of symmetric key cryptography. Its internal state contains a permutation over all possible bytes from 0 to 255, and it attempts to generate a pseudo-random sequence of bytes (called keystream) by extracting elements of this permutation. Over the last twenty years, numerous cryptanalytic results on the RC4 stream cipher have been published, many of which are based on non-random (biased) events involving the secret key, the state variables, and the keystream of the cipher. Though biases based on the secret key are common in the RC4 literature, none of the existing ones depends on the length of the secret key. In the first part of this paper, we investigate the effect of RC4 keylength on its keystream, and report significant biases involving the length of the secret key. In the process, we prove the two known empirical biases that were experimentally reported and used in recent attacks against WEP and WPA by Sepehrdad, Vaudenay and Vuagnoux in EUROCRYPT 2011. After our current work, there remains no bias in the literature of WEP and WPA attacks without a proof. In the second part of the paper, we present theoretical proofs of some significant initial-round empirical biases observed by Sepehrdad, Vaudenay and Vuagnoux in SAC 2010. In the third part, we present the derivation of the complete probability distribution of the first byte of RC4 keystream, a problem left open for a decade since the observation by Mironov in CRYPTO 2002. Further, the existence of positive biases towards zero for all the initial bytes 3 to 255 is proved and exploited towards a generalized broadcast attack on RC4. We also investigate long-term non-randomness in the keystream, and prove a new long-term bias of RC4.
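
For reference, RC4's keystream generation is only a few lines; running it over many random keys reproduces the kind of single-byte keystream bias a broadcast attack exploits. The sketch below checks the well-known Mantin–Shamir bias (the second output byte equals zero with probability about 2/256 instead of 1/256); the biases this paper proves for bytes 3 to 255 are of the same flavour but much smaller. The 16-byte key length and trial count are arbitrary choices for the demonstration, not parameters from the paper.

```python
import random

def rc4_keystream(key, n):
    # Key-scheduling algorithm (KSA): key-dependent shuffle of 0..255
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): extract n keystream bytes
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

trials = 20000
# Count how often the second keystream byte is 0 over random 16-byte keys
hits = sum(rc4_keystream([random.randrange(256) for _ in range(16)], 2)[1] == 0
           for _ in range(trials))
# hits/trials comes out near 2/256 (Mantin-Shamir), roughly double uniform
```

In a broadcast setting, the same plaintext encrypted under many keys leaks the plaintext byte at a biased position by majority vote over the ciphertexts.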

74 citations


Journal ArticleDOI
TL;DR: The construction guarantees full simulation in the presence of malicious, polynomial-time adversaries (assuming the hardness of the DDH problem) and exhibits computation and communication costs of O(n+m) group elements in a constant round complexity.
Abstract: We propose a protocol for the problem of secure two-party pattern matching, where Alice holds a text t ∈ {0,1}* of length n, while Bob has a pattern p ∈ {0,1}* of length m. The goal is for Bob to (only) learn where his pattern occurs in Alice's text, while Alice learns nothing. Private pattern matching is an important problem that has many applications in the area of DNA search, computational biology and more. Our construction guarantees full simulation in the presence of malicious, polynomial-time adversaries (assuming the hardness of the DDH problem) and exhibits computation and communication costs of O(n+m) group elements in a constant round complexity. This improves over previous work by Gennaro et al. (Public Key Cryptography, pp. 145–160, 2010) whose solution requires overhead of O(nm) group elements and exponentiations in O(m) rounds. In addition to the above, we propose a collection of protocols for important variations of the secure pattern matching problem that are significantly more efficient than the current state-of-the-art solutions. First, we deal with secure pattern matching with wildcards. In this variant the pattern may contain wildcards that match both 0 and 1. Our protocol requires O(n+m) communication and O(1) rounds using O(nm) computation. Second, we treat secure approximate pattern matching. In this variant the matches may be approximate, i.e., have Hamming distance less than some threshold τ. Our protocol requires O(nτ) communication in O(1) rounds using O(nm) computation. Third, we have secure pattern matching with hidden pattern length. Here, the length, m, of Bob's pattern remains a secret. Our protocol requires O(n+M) communication in O(1) rounds using O(n+M) computation, where M is an upper bound on m. Finally, we have secure pattern matching with hidden text length. In this variant the length, n, of Alice's text remains a secret. Our protocol requires O(N+m) communication in O(1) rounds using O(N+m) computation, where N is an upper bound on n.
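
In the clear, the functionalities these protocols compute securely are simple to state. A plain (insecure) reference implementation of the wildcard and approximate variants might look like this; the function names and the `'*'` wildcard encoding are ours, not from the paper:

```python
def wildcard_matches(text, pattern):
    """Positions where pattern occurs; '*' in the pattern matches both 0 and 1."""
    m = len(pattern)
    return [i for i in range(len(text) - m + 1)
            if all(p == '*' or p == c for p, c in zip(pattern, text[i:i + m]))]

def approx_matches(text, pattern, tau):
    """Positions where the Hamming distance to the pattern is below threshold tau."""
    m = len(pattern)
    return [i for i in range(len(text) - m + 1)
            if sum(p != c for p, c in zip(pattern, text[i:i + m])) < tau]

wildcard_matches("0110101", "1*0")   # -> [1]
approx_matches("0110101", "111", 2)  # -> [0, 1, 2, 4]
```

The secure protocols let Bob learn exactly these index lists (and nothing more) while Alice learns nothing about the pattern.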

67 citations


Journal ArticleDOI
TL;DR: A new type of attack called a sandwich attack is described and used to construct a simple related-key distinguisher for 7 of the 8 rounds of KASUMI with an amazingly high probability of 2^-14, which indicates that the modifications made by ETSI's SAGE group in moving from MISTY to KASUMI made it extremely weak when related-key attacks are allowed.
Abstract: Over the last 20 years, the privacy of most GSM phone conversations was protected by the A5/1 and A5/2 stream ciphers, which were repeatedly shown to be cryptographically weak. They are now being replaced by the new A5/3 and A5/4 algorithms, which are based on the block cipher KASUMI. In this paper we describe a new type of attack called a sandwich attack, and use it to construct a simple related-key distinguisher for 7 of the 8 rounds of KASUMI with an amazingly high probability of 2^-14. By using this distinguisher and analyzing the single remaining round, we can derive the complete 128-bit key of the full KASUMI with a related-key attack which uses only 4 related keys, 2^26 data, 2^30 bytes of memory, and 2^32 time. These completely practical complexities were experimentally verified by performing the attack in less than two hours on a single core of a PC. Interestingly, neither our technique nor any other published attack can break the original MISTY block cipher (on which KASUMI is based) significantly faster than exhaustive search. Our results thus indicate that the modifications made by ETSI's SAGE group in moving from MISTY to KASUMI made it extremely weak when related-key attacks are allowed, but do not imply anything about its resistance to single-key attacks. Consequently, there is no indication that the way KASUMI is implemented in GSM and 3G networks is practically vulnerable in any realistic attack model.

65 citations


Journal ArticleDOI
TL;DR: The authors study best-possible obfuscation, which guarantees that any information that is not hidden by the obfuscated program is also not hidden by any other similar-size program computing the same functionality, and thus the obfuscation is the best possible.
Abstract: An obfuscator is a compiler that transforms any program (which we will view in this work as a boolean circuit) into an obfuscated program (also a circuit) that has the same input-output functionality as the original program, but is "unintelligible". Obfuscation has applications for cryptography and for software protection. Barak et al. (CRYPTO 2001, pp. 1–18) initiated a theoretical study of obfuscation, which focused on black-box obfuscation, where the obfuscated circuit should leak no information except for its (black-box) input-output functionality. A family of functionalities that cannot be obfuscated was demonstrated. Subsequent research has shown further negative results as well as positive results for obfuscating very specific families of circuits, all with respect to black-box obfuscation. This work is a study of a new notion of obfuscation, which we call best-possible obfuscation. Best-possible obfuscation makes the relaxed requirement that the obfuscated program leaks as little information as any other program with the same functionality (and of similar size). In particular, this definition allows the program to leak information that cannot be obtained from a black box. Best-possible obfuscation guarantees that any information that is not hidden by the obfuscated program is also not hidden by any other similar-size program computing the same functionality, and thus the obfuscation is (literally) the best possible. In this work we study best-possible obfuscation and its relationship to previously studied definitions. Our main results are: (1) A separation between black-box and best-possible obfuscation. We show a natural obfuscation task that can be achieved under the best-possible definition, but cannot be achieved under the black-box definition. (2) A hardness result for best-possible obfuscation, showing that strong (information-theoretic) best-possible obfuscation implies a collapse in the Polynomial-Time Hierarchy.
(3) An impossibility result for efficient best-possible (and black-box) obfuscation in the presence of random oracles. This impossibility result uses a random oracle to construct hard-to-obfuscate circuits, and thus it does not imply impossibility in the standard model.

59 citations


Journal ArticleDOI
TL;DR: In this article, the authors defined the characteristics of each part of a business model, i.e., customers, distribution, value, resources, activities, cost and revenue, and discussed the most used characteristics, extremes, discrepancies and the most important facts which were detected in their research.
Abstract: The term business model has been used in practice for a few years, but companies have created, defined and innovated their models subconsciously since the start of business. Our paper aims to clarify the theory of the business model, i.e., its definition and all the components that form each business. In the second part, we create an analytical tool, analyze real business models in Slovakia and define the characteristics of each part of the business model, i.e., customers, distribution, value, resources, activities, cost and revenue. In the last part of our paper, we discuss the most used characteristics, extremes, discrepancies and the most important facts detected in our research.

55 citations


Journal ArticleDOI
TL;DR: In this paper, the authors define and compare current trends within business risks among small and medium enterprises in selected regions of the Czech Republic and Slovakia in the context of entrepreneurial optimism, and show that the most important business risk is still market risk, followed by financial risk and finally personal risk.
Abstract: The aim of this article is to define and compare current trends within business risks among small and medium enterprises in selected regions of the Czech Republic and Slovakia in the context of entrepreneurial optimism. In 2013, research on entrepreneurs' opinions was conducted in the Zlin Region (Czech Republic) and the Žilina Region (Slovakia). These regions have similar economic parameters and are separated by only a few miles. According to our research, it can be stated that during the period of financial crisis, the situation in SME business deteriorated significantly, with declining performance and profitability of Czech and Slovak small and medium enterprises. The most important business risk is still market risk, followed by financial risk and finally personal risk. Our research showed that the profitability of Czech and Slovak small and medium enterprises decreased by 15%. Despite these facts, the level of entrepreneurial optimism among SMEs in the selected regions of the Czech Republic and Slovakia is very high.

51 citations


Journal ArticleDOI
TL;DR: In this paper, the Virtual Black Box (VBB) property is relaxed to allow the simulator unbounded computation time, while still allowing only polynomially many queries to the oracle; the resulting notion is called Virtual Grey Box (VGB).
Abstract: The Virtual Black Box (VBB) property for program obfuscators provides a strong guarantee: anything computable by an efficient adversary, given the obfuscated program, can also be computed by an efficient simulator, with only oracle access to the program. However, we know how to achieve this notion only for very restricted classes of programs. This work studies a simple relaxation of VBB: allow the simulator unbounded computation time, while still allowing only polynomially many queries to the oracle. We demonstrate the viability of this relaxed notion, which we call Virtual Grey Box (VGB), in the context of composable obfuscators for point programs: it is known that, with respect to VBB, if such obfuscators exist, then there exist multi-bit point obfuscators (also known as "digital lockers") and subsequently also very strong variants of encryption that are resilient to various attacks, such as key leakage and key-dependent messages. However, no composable VBB-obfuscators for point programs have been shown. We show composable VGB-obfuscators for point programs under a strong variant of the Decision Diffie–Hellman assumption. We show that VGB (instead of VBB) obfuscation still suffices for the above applications, as well as for new applications. This includes extensions to the public-key setting and to encryption schemes with resistance to certain related-key attacks (RKA).
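
The paper's construction is algebraic (based on a strong DDH variant), but the basic idea of obfuscating a point program, i.e., the function x ↦ (x == s), is easy to illustrate with the folklore random-oracle heuristic: publish a salted hash of the point s, so the program can still be evaluated while s itself stays hidden. This sketch is ours and only heuristic (it models SHA-256 as a random oracle); it is not the construction from the paper.

```python
import hashlib
import os

def obfuscate_point(s: bytes):
    """Return an 'obfuscated' point program for x -> (x == s)."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + s).digest()
    # The obfuscated program is just (salt, digest): it can be evaluated on
    # any input, but recovering s from it requires inverting the hash.
    def program(x: bytes) -> bool:
        return hashlib.sha256(salt + x).digest() == digest
    return program

P = obfuscate_point(b"secret")
# P(b"secret") -> True; any other input -> False
```

A composable obfuscator must stay secure even when many obfuscations of related points are given out at once, which is exactly where such heuristic constructions need the stronger VGB-style analysis.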

48 citations


Journal ArticleDOI
TL;DR: This paper defines multi-string non-interactive zero-knowledge proofs and proves that they exist under general cryptographic assumptions, and suggests a universally composable commitment scheme in the multi-string model, where it has been proven that UC commitment does not exist in the plain model without setup assumptions.
Abstract: The common random string model introduced by Blum, Feldman, and Micali permits the construction of cryptographic protocols that are provably impossible to realize in the standard model. We can think of this model as a trusted party generating a random string and giving it to all parties in the protocol. However, the introduction of such a third party should set alarm bells going off: Who is this trusted party? Why should we trust that the string is random? Even if the string is uniformly random, how do we know it does not leak private information to the trusted party? The very point of doing cryptography in the first place is to prevent us from trusting the wrong people with our secrets. In this paper, we propose the more realistic multi-string model. Instead of having one trusted authority, we have several authorities that generate random strings. We do not trust any single authority; we only assume a majority of them generate random strings honestly. Our results also hold even if different subsets of these strings are used in different instances, as long as a majority of the strings used at any particular invocation is honestly generated. This security model is reasonable and at the same time very easy to implement. We could for instance imagine random strings being provided on the Internet, and any set of parties that want to execute a protocol just need to agree on which authorities' strings they want to use. We demonstrate the use of the multi-string model in several fundamental cryptographic tasks. We define multi-string non-interactive zero-knowledge proofs and prove that they exist under general cryptographic assumptions. Our multi-string NIZK proofs have very strong security properties such as simulation-extractability and extraction zero-knowledge, which makes it possible to compose them with arbitrary other protocols and to reuse the random strings. 
We also build efficient simulation-sound multi-string NIZK proofs for circuit satisfiability based on groups with a bilinear map. The sizes of these proofs match the best constructions in the single common random string model. We also suggest a universally composable commitment scheme in the multi-string model. It has been proven that UC commitment does not exist in the plain model without setup assumptions. Prior to this work, constructions were only known in the common reference string model and the registered public key model. The UC commitment scheme can be used in a simple coin-flipping protocol to create a uniform random string, which in turn enables the secure realization of any multi-party computation protocol.

47 citations


Journal ArticleDOI
TL;DR: In this article, the effect of electric power fluctuations on the profitability and competitiveness of SMEs, using SMEs operating within the Accra business district of Ghana as a case study, was analyzed.
Abstract: The economy of Ghana has attained middle-income status and is seeking to advance; hence, an analysis of the economy based on the supply chain management of energy is significant to provide quantitative results and comprehensive information about how and where energy use affects economic growth and development. This information is necessary to enable the government to respond promptly with measures that will improve the supply of energy to ensure the profitability and competitiveness of firms. The objective of this paper is to analyse the effect of electric power fluctuations on the profitability and competitiveness of SMEs, using SMEs operating within the Accra business district of Ghana as a case study. This research is a cross-sectional survey and it adopted a mixed-method approach. A sample of 70 Ghanaian SMEs was selected using a systematic sampling approach. The inclusion criterion for the selection of the SMEs was their location within the business district of Accra as well as their use of electricity in their main business operation. Data were collected with an interviewer-administered structured questionnaire which focused on the effect of power fluctuation on the operations of SMEs, especially on profitability and its resulting effect on the firms' competitiveness. The SPSS statistical package was used to group and analyse the data. The study is a single-factor analysis of the exogenous problems facing the Small and Medium Enterprise sector. The study found that without a reliable energy supply, SMEs are unable to produce in increased quantities and quality, leading to poor sales and hence low levels of profitability. It is established that low profitability negatively affects the Return on Assets (ROA) and Return on Investment (ROI) of SMEs. Consequently, if the level of profitability is high, it is expected that ROA and ROI will be high, and vice versa. With high profits, SMEs are able to increase their competitiveness.

46 citations


Journal ArticleDOI
TL;DR: In this paper, the authors introduce theoretical aspects of CSR in commercial banking and measure the level of corporate social responsibility in selected Czech commercial banks; the CSR index was calculated for Ceska spořitelna, Ceskoslovenska obchodni banka, Komercni banka, and GE Money Bank.
Abstract: The concept of corporate social responsibility is not new in the banking sector, but it has become highly topical since the crisis significantly highlighted the need to integrate moral principles into the banking business. Business practice indicates that the acceptance of moral principles in business is not integrated into the management decisions of companies. Nor can it be assumed that self-regulatory instruments of companies, such as CSR, will be effective. The existing experience with the implementation of CSR and ethical principles in the banking sector leads to the opinion that the social responsibility of banks and ethics in the banking sector are perceived as an appropriate marketing tool for public communication and are not integrated into the policies of individual commercial banks. Experience with the crisis demonstrated that there is a lack of moral principles in managers' decisions. The aim of this article is to introduce theoretical aspects of CSR in commercial banking and measure the level of CSR in selected Czech commercial banks. In the article, the CSR index was calculated for Ceska spořitelna, Ceskoslovenska obchodni banka, Komercni banka and GE Money Bank. The results of our research confirmed that the CSR index of the selected Czech commercial banks achieves only an average level. There is a significant lack of transparent information in the context of CSR areas.

45 citations


Journal ArticleDOI
TL;DR: A relation between the notions of verifiable random functions (VRFs) and identity-based key encapsulation mechanisms (IB-KEMs) is shown and a direct construction of VRFs from VRF-suitable IB-KEMs is proposed.
Abstract: In this paper we show a relation between the notions of verifiable random functions (VRFs) and identity-based key encapsulation mechanisms (IB-KEMs). In particular, we propose a class of IB-KEMs that we call VRF-suitable, and we propose a direct construction of VRFs from VRF-suitable IB-KEMs. Informally, an IB-KEM is VRF-suitable if it provides what we call unique decapsulation (i.e., given a ciphertext C produced with respect to an identity ID, all the secret keys corresponding to an identity ID′ decapsulate it to the same value, even if ID ≠ ID′), and it satisfies an additional property that we call pseudo-random decapsulation. In a nutshell, pseudo-random decapsulation means that if one decapsulates a ciphertext C, produced with respect to an identity ID, using the decryption key corresponding to any other identity ID′, the resulting value looks random to a polynomially bounded observer. Our construction is of interest both from a theoretical and a practical perspective. Indeed, apart from establishing a connection between two seemingly unrelated primitives, our methodology is direct in the sense that, in contrast to most previous constructions, it avoids the inefficient Goldreich–Levin hardcore bit transformation. As an additional contribution, we propose a new VRF-suitable IB-KEM based on the decisional ℓ-weak Bilinear Diffie–Hellman Inversion assumption. Interestingly, when applying our transformation to this scheme, we obtain a new VRF construction that is secure under the same assumption, and it efficiently supports a large input space.

Journal ArticleDOI
TL;DR: This paper shows how to take advantage of some symmetries of twisted Edwards and twisted Jacobi intersections curves to gain an exponential factor 2^(ω(n−1)) in solving the corresponding PDP, where ω is the exponent in the complexity of multiplying two dense matrices.
Abstract: In 2004, an algorithm was introduced to solve the DLP for elliptic curves defined over a non-prime finite field $\mathbb{F}_{q^{n}}$ . One of the main steps of this algorithm requires decomposing points of the curve $E(\mathbb{F}_{q^{n}})$ with respect to a factor base; this problem is denoted PDP. In this paper, we apply this algorithm to the case of Edwards curves, the well-known family of elliptic curves that allow faster arithmetic as shown by Bernstein and Lange. More precisely, we show how to take advantage of some symmetries of twisted Edwards and twisted Jacobi intersections curves to gain an exponential factor 2^(ω(n−1)) in solving the corresponding PDP, where ω is the exponent in the complexity of multiplying two dense matrices. Practical experiments supporting the theoretical result are also given. For instance, the complexity of solving the ECDLP for twisted Edwards curves defined over $\mathbb{F}_{q^{5}}$ , with q ≈ 2^64, is supposed to be ~2^160 operations in $E(\mathbb{F}_{q^{5}})$ using generic algorithms, compared to 2^130 operations (multiplications of two 32-bit words) with our method. For these parameters the PDP is intractable with the original algorithm. The main tool to achieve these results relies on the use of the symmetries and the quasi-homogeneous structure induced by these symmetries during the polynomial system solving step. Also, we use a recent work on a new algorithm for the change of ordering of a Gröbner basis which provides a better heuristic complexity of the total solving process.
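
The symmetry in question is easy to see on a toy example: on an Edwards curve x² + y² = 1 + d·x²y² over a small prime field (parameters below are chosen only for illustration; the paper works with twisted curves over extension fields), negation is simply (x, y) ↦ (−x, y), and with a non-square d the addition law is unified and complete. A quick sanity check:

```python
p, d = 13, 2  # toy parameters: d = 2 is a non-square mod 13, so addition is complete

def inv(a):
    return pow(a, p - 2, p)  # modular inverse via Fermat's little theorem

def on_curve(P):
    x, y = P
    return (x * x + y * y) % p == (1 + d * x * x * y * y) % p

def add(P, Q):
    # Unified Edwards addition law (same formula for doubling and addition)
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2
    x3 = (x1 * y2 + y1 * x2) * inv((1 + t) % p) % p
    y3 = (y1 * y2 - x1 * x2) * inv((1 - t) % p) % p
    return (x3, y3)

points = [(x, y) for x in range(p) for y in range(p) if on_curve((x, y))]
# The symmetry exploited in the attack: (x, y) -> (-x, y) is point negation,
# so P + (-P) = (0, 1), the neutral element.
assert all(add(P, ((-P[0]) % p, P[1])) == (0, 1) for P in points)
assert all(on_curve(add(P, Q)) for P in points for Q in points)
```

It is this negation symmetry (and the analogous ones on twisted Jacobi intersections) that induces the quasi-homogeneous structure used to speed up the polynomial system solving.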

Journal ArticleDOI
TL;DR: In this article, the authors examined the relationship between organizational citizenship behaviour, hospital corporate image and performance and found that hospitals can increase performance through organizational citizenship behaviour and a positive corporate image; however, it was also discovered that there is a negative covariance between organizational citizenship behaviour and corporate image despite their individual positive contribution to performance.
Abstract: This study examines the relationship between organizational citizenship behaviour, hospital corporate image and performance. Questionnaires were distributed to 350 patients and 298 usable questionnaires were returned, representing a return rate of 85.7%. The study employs a Structural Equation Model to test four hypotheses on organizational citizenship behaviours, hospital corporate image and performance. The findings reveal that hospitals can increase performance through organizational citizenship behaviour and a positive corporate image. However, it was also discovered that there is a negative covariance between organizational citizenship behaviour and hospital corporate image despite their individual positive contributions to performance. Therefore, hospital management should develop an organizational climate (such as recognition, additional reward, promotion, etc.) that can promote organizational citizenship behaviour and enhance a positive corporate image while preventing situations that will discourage staff from rendering extra positive discretionary work-related services.

Journal ArticleDOI
TL;DR: In this paper, the authors explored the ways in which student satisfaction can be achieved with the use of customer relationship management and found that student's willingness to recommend to others increases when the student lifecycle in the university is well managed.
Abstract: The primary objective of the article was to determine the relationship between customer relationship management and student satisfaction. The study explored the ways in which student satisfaction can be achieved with the use of customer relationship management. Both descriptive and inferential statistics were employed in this research. The following hypotheses were formulated in this study: student lifecycle management has a significant impact on the student's willingness to recommend to others; parent relationship management has a positive impact on the students' willingness to recommend their universities to others. A multiple regression analysis was employed in the hypothesis testing. The research findings showed that the student's willingness to recommend to others increases when the student lifecycle in the university is well managed. It was also discovered that strong parent relationship management at the university enhances the student's willingness to recommend their university to others. It is therefore recommended that universities should adopt effective customer relationship management strategies to achieve student satisfaction.
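
The hypothesis tests rest on ordinary multiple regression. On entirely synthetic (hypothetical) data with two CRM predictors, the fit can be sketched as:

```python
import numpy as np

# Hypothetical illustration only: regress a "willingness to recommend" score
# on two CRM predictors (lifecycle-management and parent-relationship scores).
rng = np.random.default_rng(0)
lifecycle = rng.uniform(1, 5, 100)
parent = rng.uniform(1, 5, 100)
willingness = 0.8 * lifecycle + 0.5 * parent + 1.0  # synthetic, noise-free

# Design matrix: intercept column plus the two predictors
X = np.column_stack([np.ones(100), lifecycle, parent])
coef, *_ = np.linalg.lstsq(X, willingness, rcond=None)
# coef recovers the generating coefficients [1.0, 0.8, 0.5]
```

In the actual study the coefficients and their significance tests would come from survey responses, not from a generated series.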

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the concept of work-life balance (WLB) policies and practices in three sectors of the Nigerian economy, namely the Banking, Educational and Power sectors.
Abstract: The study investigates the concept of work-life balance (WLB) policies and practices in three sectors of the Nigerian economy, namely the Banking, Educational and Power sectors. The types of WLB initiatives available in the three sectors were explored and the barriers to implementation of the WLB initiatives were identified. This research employed quantitative methods to investigate the work-life balance practices in the three sectors. This was achieved using an in-depth case study analysis of these sectors. The data set comprised responses from both managers and employees: five hundred and eighty-six copies of the questionnaire were retrieved in the Banking sector, five hundred and thirty-one in the Educational sector, and five hundred and seven in the Power sector. The findings reveal that there is diversity in terms of how respondents perceive the concept of work-life balance. There is a wide gap between corporate WLB practices and employees' understanding of the concept; the paper suggests some policy implications which would aid the implementation of WLB policies in the studied sectors. This study also suggests directions for future research.

Journal ArticleDOI
TL;DR: This work proves the first generic KDM amplification theorem which relies solely on the KDM security of the underlying scheme without making any other assumptions, and shows that an elementary form of KDM security against functions in which each output bit either copies or flips a single bit of the key can be amplified into KDM security with respect to any function family that can be computed in arbitrary fixed polynomial time.
Abstract: Key-dependent message (KDM) secure encryption schemes provide secrecy even when the attacker sees encryptions of messages related to the secret key sk. Namely, the scheme should remain secure even when messages of the form f(sk) are encrypted, where f is taken from some function class $\mathcal{F}$ . A KDM amplification procedure takes an encryption scheme which satisfies $\mathcal{F}$ -KDM security, and boosts it into a $\mathcal{G}$ -KDM secure scheme, where the function class $\mathcal{G}$ should be richer than $\mathcal{F}$ . It was recently shown by Brakerski et al. (TCC 2011) and Barak et al. (EUROCRYPT 2010) that a strong form of amplification is possible, provided that the underlying encryption scheme satisfies some special additional properties. In this work, we prove the first generic KDM amplification theorem which relies solely on the KDM security of the underlying scheme without making any other assumptions. Specifically, we show that an elementary form of KDM security against functions in which each output bit either copies or flips a single bit of the key (a.k.a. projections) can be amplified into KDM security with respect to any function family that can be computed in arbitrary fixed polynomial time. Furthermore, our amplification theorem and its proof are insensitive to the exact setting of KDM security, and they hold in the presence of multiple keys and in the symmetric-key/public-key and the CPA/CCA cases. As a result, we can amplify the security of most known KDM constructions, including ones that could not be amplified before. Finally, we study the minimal conditions under which full-KDM security (with respect to all functions) can be achieved. We show that under a strong notion of KDM security, the existence of fully homomorphic encryption which allows encrypting the secret key (i.e., "cyclic-secure") is not only sufficient for full-KDM security, as shown by Barak et al., but also necessary.
On the other hand, we observe that for standard KDM security, this condition can be relaxed by adapting Gentry's bootstrapping technique (STOC 2009) to the KDM setting.
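
The "projection" function class at the heart of the amplification theorem is tiny: each output bit copies or flips a single key bit. A minimal encoding (our own, for illustration) makes this concrete:

```python
# A projection: each output bit copies or flips one bit of the key.
# Encode a projection as a list of (key_index, flip) pairs, one per output bit.
def apply_projection(proj, sk_bits):
    return [sk_bits[i] ^ flip for (i, flip) in proj]

sk = [1, 0, 1, 1]
f = [(2, 0), (0, 1), (3, 1)]  # output = (sk[2], not sk[0], not sk[3])
apply_projection(f, sk)       # -> [1, 0, 0]
```

The theorem says that an encryption scheme which stays secure when messages of this restricted form f(sk) are encrypted can be boosted to stay secure for any fixed-polynomial-time f.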

Journal ArticleDOI
TL;DR: In this article, the authors combine two powerful methods of symmetric cryptanalysis: rotational cryptanalysis and the rebound attack, and apply their new compositional attack to the reduced version of the hash function Skein, a finalist of the SHA-3 competition.
Abstract: In this paper we combine two powerful methods of symmetric cryptanalysis: rotational cryptanalysis and the rebound attack. Rotational cryptanalysis was designed for the analysis of bit-oriented designs like ARX (Addition-Rotation-XOR) schemes. It has been applied to several hash functions and block ciphers, including the new standard SHA-3 (Keccak). The rebound attack is a start-from-the-middle approach for finding differential paths and conforming pairs in byte-oriented designs like Substitution-Permutation networks and AES. We apply our new compositional attack to the reduced version of the hash function Skein, a finalist of the SHA-3 competition. Our attack penetrates more than two thirds of the Skein core, the cipher Threefish, and led the designers to change the submission in order to prevent it. The rebound part of our attack has been significantly enhanced to deliver results on the largest number of rounds. We also use neutral bits and message modification methods from the practice of collision search in the MD5 and SHA-1 hash functions. These methods push the rotational property through more rounds than previous analysis suggested, and eventually establish a distinguishing property for the reduced Threefish cipher. We formally prove that such a property cannot be found for an ideal cipher within the complexity limits of our attack. The complexity estimates are supported by extensive experiments.
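
The rotational property being pushed through the rounds concerns how modular addition (the only non-rotation-friendly ARX operation) interacts with rotation: for random n-bit x and y, the pair ((x + y) ⋘ r, (x ⋘ r) + (y ⋘ r)) agrees with probability about 3/8 ≈ 2^-1.415 for r = 1 (as shown by Khovratovich and Nikolić), rather than 1. This is easy to confirm empirically:

```python
import random

def rotl(x, r, n=64):
    # Rotate an n-bit word left by r positions
    return ((x << r) | (x >> (n - r))) & ((1 << n) - 1)

mask = (1 << 64) - 1
trials = 20000
hits = 0
for _ in range(trials):
    x, y = random.getrandbits(64), random.getrandbits(64)
    # Does rotation commute with addition mod 2^64 for this pair?
    if rotl((x + y) & mask, 1) == (rotl(x, 1) + rotl(y, 1)) & mask:
        hits += 1
# hits / trials settles near 3/8, the per-addition rotational probability
```

Each addition in the ARX trail pays roughly this factor, which is why pushing the rotational property through additional rounds requires the neutral-bit and message-modification machinery described above.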

Journal ArticleDOI
TL;DR: In this paper, the authors deal with small and medium enterprises in relation to the attitudes perceived by business owners in their immediate neighbourhood, society, in relation with banks and the government, and the difference between entrepreneurs who started their businesses voluntarily and those who entered the business out of necessity.
Abstract: This paper deals with small and medium enterprises in relation to the attitudes perceived by business owners in their immediate neighbourhood and society, and in relation to banks and the government. The key question is the difference between entrepreneurs who started their businesses voluntarily and those who entered business out of necessity. The majority of governmental policies, including Czech policies, focus mostly on questions of financial support; however, the support of entrepreneurs can be broader and may include efforts to influence the perception of individuals and society so that they hold a more positive attitude towards entrepreneurial activities. The attitudes in the Czech Republic are so far rather negative, and such a change may be positively reflected in the level of entrepreneurial activity, which strongly affects economic development.

Journal ArticleDOI
TL;DR: In this article, the authors explored the impact of consumer decision-making styles on their preference towards domestic brands in the context of the Czech Republic using the Consumer Style Inventory (CSI).
Abstract: The modern marketer shows a growing interest in the research of consumer decision-making styles to understand how an individual makes his/her buying decisions in a competitive environment. This concept is important because it determines the behavioral patterns of consumers and is relevant for market segmentation. Most previous researchers have adopted the Consumer Style Inventory (CSI), introduced by Sproles and Kendall in 1986, as a common tool for assessing the decision-making styles of customers. Though researchers have validated the CSI in different cultural and social contexts, very few studies have explored the relationship between consumer decision-making styles and domestic brand bias. Therefore, the present study focuses on exploring the impact of consumer decision-making styles on preference towards domestic brands in the context of the Czech Republic. The sample for this study was drawn from adult customers who live in the Brno, Zlin, and Olomouc regions of the Czech Republic. A group of students from the Bachelor's degree programme in Management and Economics at Tomas Bata University in Zlin were selected as enumerators for data collection. Altogether 200 questionnaires were distributed and 123 completed questionnaires were retained for the final analysis. The decision-making styles were measured using Sproles and Kendall's (1986) CSI instrument. Cronbach's alpha values for each construct confirmed good internal reliability of the data. Principal Component Analysis was employed to determine the decision-making styles of Czech customers, and one-way ANOVA was used for hypothesis testing. The findings revealed that seven decision-making styles appear among Czech customers, and that fashion consciousness, recreational orientation, impulsiveness, and price consciousness show a direct relationship with domestic brand bias.
Other styles did not show a significant relationship with domestic brand preferences in the given context. Finally, the researchers provide some suggestions for domestic firms in the Czech Republic to develop appropriate marketing strategies for attracting customers towards domestic brands.

Journal ArticleDOI
TL;DR: Improved collision finding techniques are developed which enable the authors to double the number of Keccak rounds for which actual collisions were found; the attack combines differential and algebraic techniques, and uses the fact that each round of Keccak is only a quadratic mapping in order to efficiently find pairs of messages which follow a high-probability differential characteristic.
Abstract: The Keccak hash function is the winner of NIST's SHA-3 competition, and so far it showed remarkable resistance against practical collision finding attacks: After several years of cryptanalysis and a lot of effort, the largest number of Keccak rounds for which actual collisions were found was only 2. In this paper, we develop improved collision finding techniques which enable us to double this number. More precisely, we can now find within a few minutes on a single PC actual collisions in the standard Keccak-224 and Keccak-256, where the only modification is to reduce their number of rounds to 4. When we apply our techniques to 5-round Keccak, we can get in a few days near collisions, where the Hamming distance is 5 in the case of Keccak-224 and 10 in the case of Keccak-256. Our new attack combines differential and algebraic techniques, and uses the fact that each round of Keccak is only a quadratic mapping in order to efficiently find pairs of messages which follow a high probability differential characteristic. Since full Keccak has 24 rounds, our attack does not threaten the security of the hash function.

Journal ArticleDOI
TL;DR: This work provides a more general and, in their eyes, simpler variant of Prabhakaran, Rosen and Sahai’s (FOCS ’02, pp. 366–375, 2002) analysis of the concurrent zero-knowledge simulation technique of Kilian and Petrank.
Abstract: We provide a more general and, in our eyes, simpler variant of Prabhakaran, Rosen and Sahai's (FOCS '02, pp. 366–375, 2002) analysis of the concurrent zero-knowledge simulation technique of Kilian and Petrank (STOC '01, pp. 560–569, 2001).

Journal ArticleDOI
TL;DR: In this article, the authors merge the GLV method of Gallant, Lambert, and Vanstone with the extension of Galbraith, Lin, and Scott to obtain four-dimensional scalar decompositions on twists of GLV curves over $\mathbb{F}_{p^2}$ , improving the state-of-the-art performance of scalar multiplication on elliptic curves over large prime characteristic fields for a variety of scenarios including side-channel protected and unprotected cases with sequential and multicore execution.
Abstract: The GLV method of Gallant, Lambert, and Vanstone (CRYPTO 2001) computes any multiple kP of a point P of prime order n lying on an elliptic curve with a low-degree endomorphism $\varPhi$ (called a GLV curve) over $\mathbb{F}_{p}$ as $$kP = k_1P + k_2\varPhi(P) \quad\text{with } \max \bigl\{ |k_1|,|k_2| \bigr\} \leq C_1\sqrt{n} $$ for some explicit constant $C_1>0$. Recently, Galbraith, Lin, and Scott (EUROCRYPT 2009) extended this method to all curves over $\mathbb{F}_{p^{2}}$ which are twists of curves defined over $\mathbb{F}_{p}$ . We show in this work how to merge the two approaches in order to get, for twists of any GLV curve over $\mathbb{F}_{p^{2}}$ , a four-dimensional decomposition together with fast endomorphisms $\varPhi, \varPsi$ over $\mathbb{F}_{p^{2}}$ acting on the group generated by a point P of prime order n, resulting in a proven decomposition for any scalar $k\in[1,n]$ given by $$kP=k_1P+ k_2\varPhi(P)+ k_3\varPsi(P) + k_4\varPsi\varPhi(P) \quad \text{with } \max_i \bigl(|k_i| \bigr) \leq C_2\, n^{1/4} $$ for some explicit constant $C_2>0$. Remarkably, taking the best $C_1, C_2$, we obtain $C_2/C_1<412$, independently of the curve, ensuring in theory an almost constant relative speedup. In practice, our experiments reveal that the use of the merged GLV–GLS approach supports a scalar multiplication that runs up to 1.5 times faster than the original GLV method. We then improve this performance even further by exploiting the Twisted Edwards model and show that curves originally slower may become extremely efficient on this model. In addition, we analyze the performance of the method on a multicore setting and describe how to efficiently protect GLV-based scalar multiplication against several side-channel attacks. Our implementations improve the state-of-the-art performance of scalar multiplication on elliptic curves over large prime characteristic fields for a variety of scenarios including side-channel protected and unprotected cases with sequential and multicore execution.
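The heart of the GLV method is writing k as a short vector in the lattice of pairs (k1, k2) with k1 + k2·λ ≡ k (mod n), where λ is the eigenvalue of the endomorphism on the order-n subgroup. A minimal two-dimensional sketch follows; the paper's contribution is the four-dimensional analogue, and `glv_decompose` is an illustrative name, not code from the paper. It uses the extended-Euclid lattice basis and Babai rounding from the original GLV construction.

```python
from fractions import Fraction
from math import isqrt

def glv_decompose(k, lam, n):
    """Write k == k1 + k2*lam (mod n) with |k1|, |k2| on the order of sqrt(n).

    lam is the eigenvalue of the endomorphism Phi on the order-n subgroup,
    i.e. Phi(P) = lam*P, so k*P = k1*P + k2*Phi(P).
    """
    # Extended Euclid on (n, lam): at every index i, s_i*n + t_i*lam = r_i.
    rs, ts = [n, lam], [0, 1]
    while rs[-1] != 0:
        q = rs[-2] // rs[-1]
        rs.append(rs[-2] - q * rs[-1])
        ts.append(ts[-2] - q * ts[-1])
    # Largest index l with r_l >= sqrt(n): the rows around it give short vectors.
    l = max(i for i, r in enumerate(rs) if r >= isqrt(n))
    a1, b1 = rs[l + 1], -ts[l + 1]
    cand = [(rs[l], -ts[l])]
    if l + 2 < len(rs):
        cand.append((rs[l + 2], -ts[l + 2]))
    a2, b2 = min(cand, key=lambda v: v[0] ** 2 + v[1] ** 2)
    # Each basis vector (a, b) satisfies a + b*lam == 0 (mod n), so subtracting
    # lattice vectors from (k, 0) preserves k1 + k2*lam == k (mod n).
    det = a1 * b2 - a2 * b1             # = +-n, the lattice determinant
    c1 = round(Fraction(k * b2, det))   # Babai rounding: nearest lattice point
    c2 = round(Fraction(-k * b1, det))  # to (k, 0) in the basis (a1,b1),(a2,b2)
    return k - c1 * a1 - c2 * a2, -c1 * b1 - c2 * b2
```

With the decomposition in hand, kP = k1·P + k2·Φ(P) is computed by a double scalar multiplication with half-length scalars, which is where the speedup comes from.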

Journal ArticleDOI
TL;DR: In this article, the authors examined the differences between results of studies focused on consumers' attitude toward advertising and found that the problem comes from the definition of AG, which is in some cases too broad.
Abstract: Based on international research, the paper examines the differences between the results of studies focused on consumers' attitudes toward advertising. The aim of this paper is to show that it is possible to find situations where the influence of attitudes towards specific ads in general (ASG) on attitudes toward advertising (Aad) can be observed, and also situations with no influence of attitudes toward ads in general (AG) on Aad. The paper shows that the problem comes from the definition of AG. The experiments described in this paper detect attitudinal differences toward advertising in general among the studied nations depending on the type of advertising. The research encompasses respondents from three countries with different economic and cultural backgrounds (Germany, Ukraine and the USA). The data were collected through a quantitative survey and an experiment among university students. The results show that the concept of AG is in some cases too broad. Differences in AG were confirmed between Ukraine and the other countries. According to AG, the respondents from Germany are more pessimistic and the respondents from the USA are more optimistic. This disparity was explained by a significantly different share of Orthodox and atheist respondents compared to the other religious affiliations.

Journal ArticleDOI
TL;DR: In this paper, the effects of pre-purchase search motivation (PSM) on user attitudes toward advertising on social network sites (SNSs) are identified, and the association between attitudes toward social network advertising (SNA) and users' banner ad-clicking behavior on SNSs is also assessed.
Abstract: Over the last few years, social media have profoundly changed the ways of social and business communication. In particular, social network sites (SNSs) have rapidly grown in popularity and number of users globally. They have become the main place for social interaction, discussion and communication. Today, businesses of various types use SNSs for commercial communication. Banner advertising is one of the common methods of commercial communication on SNSs. Advertising is a key source of revenue for many SNS firms such as Facebook. In fact, the existence of many SNS owners and advertisers is contingent upon the success of social network advertising (SNA). Users demand free SNS services, which makes SNA crucial for SNS firms. SNA can be effective only if it is aligned with user motivations. The marketing literature identifies pre-purchase search as a primary consumer motivation for using media. The current study aims to identify the effects of pre-purchase search motivation (PSM) on user attitudes toward SNA. It also assesses the association between attitudes toward SNA and users' banner ad-clicking behavior on SNSs. Data were gathered from 200 university students in Islamabad using an offline survey. The results show positive effects of PSM on user attitudes toward SNA. They also show a positive association between user attitudes toward SNA and their SNS banner ad-clicking behavior. Firms which promote their products through SNSs to young South Asian consumers may benefit from the findings of the current study.

Journal Article
TL;DR: In this article, three texture feature extraction techniques: gray level co-occurrence matrix (GLCM), Gabor filter, and global neighborhood structure (GNS) map, were used for the fault diagnosis of induction motors.
Abstract: This paper presents three texture feature extraction techniques for the fault diagnosis of induction motors: the gray level co-occurrence matrix (GLCM), Gabor filters, and the global neighborhood structure (GNS) map. Acoustic emission (AE) fault signals are converted into two-dimensional (2D) gray-level images, whose texture is used for feature extraction. The extracted texture features are used as inputs to a multi-class support vector machine (MCSVM) to classify each fault. The Gaussian radial basis function kernel is used with the MCSVM to handle the non-linear fault features of the AE signals. Experimental results with one-second AE signals sampled at 1 MHz showed that the GLCM-based feature extraction method outperformed the Gabor filter and the GNS map in terms of classification accuracy because of its ability to capture the spatial dependence of gray-level texture values.
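A GLCM counts how often pairs of gray levels co-occur at a fixed pixel offset; Haralick features such as contrast and energy are then simple sums over the normalized matrix. The minimal sketch below assumes a row-major list-of-lists image already quantized to `levels` gray levels, and the function names are illustrative, not the paper's implementation:

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one (dx, dy) offset, normalized to probabilities."""
    h, w = len(image), len(image[0])
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                counts[image[y][x]][image[y2][x2]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def contrast(p):
    """Haralick contrast: sum over (i, j) of (i - j)^2 * p(i, j)."""
    return sum((i - j) ** 2 * p[i][j] for i in range(len(p)) for j in range(len(p)))

def energy(p):
    """Haralick energy (angular second moment): sum of p(i, j)^2."""
    return sum(v * v for row in p for v in row)
```

In the fault-diagnosis pipeline described above, several such features (typically over multiple offsets and angles) would form the input vector handed to the MCSVM.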

Journal Article
TL;DR: In this paper, an energy-efficient location-based service is proposed to reduce power dissipation by substituting less power-intensive sensors when the smartphone is in a static state, such as on a table in an office.
Abstract: Mobile computing devices such as smartphones and personal media players are challenging platforms because of their intrinsic constraints, such as battery capacity, the limits of wireless networks, and device limitations. The first and fundamental challenge is the power inefficiency of location-aware functions. Location-based applications are killer applications on smartphones, but they consume large amounts of power when operated for a long period; thus, an energy-efficient location-based service is proposed. A second challenge is power inefficiency with respect to mobile cloud computing. The proposed framework reduces power dissipation by substituting less power-intensive sensors when the smartphone is in a static state, such as on a table in an office. This substitution is controlled by a finite state machine with a user-movement detection strategy. The core technique employed by the proposed framework is based on a web service and SOAP, because a web service is the best fit for a framework that does not depend on a specific smartphone OS platform. The proposed framework architecture was evaluated using an application for PI value computation. The results demonstrated that the mobile cloud computing platform delivered better performance as the number of cloud nodes increased, and that the resource management strategy improved power consumption.

Journal ArticleDOI
TL;DR: It is shown that chameleon hash functions and Sigma protocols are equivalent, and a transform of any suitable Sigma protocol to a chamleon hash function is provided, which enables to unify previous designs of chameLeon hash functions, seeing them all as emanating from a common paradigm.
Abstract: This paper shows that chameleon hash functions and Sigma protocols are equivalent. We provide a transform of any suitable Sigma protocol to a chameleon hash function, and also show that any chameleon hash function is the result of applying our transform to some suitable Sigma protocol. This enables us to unify previous designs of chameleon hash functions, seeing them all as emanating from a common paradigm, and also obtain new designs that are more efficient than previous ones. In particular, via a modified version of the Fiat–Shamir protocol, we obtain the fastest known chameleon hash function with a proof of security based on the standard factoring assumption. The increasing number of applications of chameleon hash functions, including on-line/off-line signing, chameleon signatures, designated-verifier signatures and conversion from weakly-secure to fully-secure signatures, make our work of contemporary interest.
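A chameleon hash is collision-resistant for everyone except the holder of a trapdoor, who can open any hash to any message. The classic discrete-log instantiation in the style of Krawczyk and Rabin illustrates the idea; the toy parameters below are far too small to be secure, and the function names are ours, not the paper's:

```python
# Toy subgroup: p = 2q + 1 with q prime; g generates the order-q subgroup mod p.
p, q = 23, 11
g = 4                  # 4 = 2^2 has order 11 modulo 23
x = 7                  # trapdoor (the "chameleon" secret key)
h = pow(g, x, p)       # public key

def cham_hash(m, r):
    """Chameleon hash H(m, r) = g^m * h^r mod p."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def collide(m, r, m_new):
    """Trapdoor collision: find r_new with H(m_new, r_new) == H(m, r).

    Since H(m, r) = g^(m + x*r), solve m + x*r == m_new + x*r_new (mod q),
    giving r_new = r + (m - m_new) * x^{-1} mod q.
    """
    return (r + (m - m_new) * pow(x, -1, q)) % q
```

Without the trapdoor x, finding such an r_new amounts to computing a discrete logarithm in the subgroup, which is what makes the hash binding for everyone else; this trapdoor-collision structure is exactly what the paper's transform extracts from the special-soundness of a Sigma protocol.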

Journal ArticleDOI
TL;DR: The research based on expert interviews is presented in the paper and should help management of companies to organize internal communication in the way which allows them to accomplish their knowledge management strategy.
Abstract: The world nowadays is often described as a world of knowledge. Knowledge is a critical differentiator for companies and a source of competitive advantage. Companies therefore pay careful attention to knowledge management and put a lot of energy and money into the right set-ups, ensuring that knowledge is owned and used as adequately as possible. Part of that is also the transfer of knowledge between individuals. The paper deals with the topic of knowledge transfer and sharing, and aims to identify the most efficient tool of internal communication in terms of knowledge transfer. Research based on expert interviews is presented in the paper. The results should help the management of companies organize internal communication in a way which allows them to accomplish their knowledge management strategy.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper investigated the relationship among online retailing information quality, e-satisfaction, e-trust and young generation customers' commitment in mainland China, using confirmatory factor analysis (CFA) and structural equation modeling (SEM).
Abstract: The purpose of this study is to investigate the relationships among online retailing information quality, e-satisfaction, e-trust and young-generation customers' commitment in mainland China. The study variables have considerable importance for e-tailers' performance. The data were collected from a sample of 383 students from Chinese universities during the first quarter of 2014. We used confirmatory factor analysis (CFA) and structural equation modeling (SEM) to evaluate the hypotheses about the relationships among the model constructs. All the hypotheses developed in the study were positively confirmed except one. The investigated variables therefore reinforce the theory and previous research in this field. This study reveals interesting implications of information quality, e-satisfaction, e-trust and customer commitment that are useful to both academics and practitioners.

Journal ArticleDOI
TL;DR: In this paper, the authors re-examine the notion of interactive hashing and prove the security of a variant of the Naor et al. protocol, which yields a more versatile interactive hashing theorem.
Abstract: Interactive hashing, introduced by Naor, Ostrovsky, Venkatesan, and Yung (J. Cryptol. 11(2):87–108, 1998), plays an important role in many cryptographic protocols. In particular, interactive hashing is a major component in all known constructions of statistically hiding commitment schemes and of statistical zero-knowledge arguments based on general one-way permutations/functions. Interactive hashing with respect to a one-way function f is a two-party protocol that enables a sender who knows y=f(x) to transfer a random hash z=h(y) to a receiver such that the sender is committed to y: the sender cannot come up with x and x′ such that f(x)≠f(x′) but h(f(x))=h(f(x′))=z. Specifically, if f is a permutation and h is a two-to-one hash function, then the receiver does not learn which of the two preimages {y,y′}=h^{−1}(z) is the one the sender can invert with respect to f. This paper reexamines the notion of interactive hashing, and proves the security of a variant of the Naor et al. protocol, which yields a more versatile interactive hashing theorem. When applying our new proof to (an equivalent variant of) the Naor et al. protocol, we get an alternative proof for this protocol that seems simpler and more intuitive than the original one, and achieves better parameters (in terms of how security-preserving the reduction is).
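The two-to-one property the protocol relies on is easy to see concretely: a binary matrix with m−1 rows, m columns and full row rank over GF(2) defines a linear hash whose kernel has dimension 1, so every output has exactly two preimages. A toy sketch (illustrative function names, not the paper's choice of h, which is built interactively):

```python
import itertools
import random

def gf2_rank(M):
    """Row rank of a binary matrix over GF(2) (mutates its argument)."""
    rank = 0
    for col in range(len(M[0])):
        pivot = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def random_full_rank_matrix(rows, cols, seed=0):
    """Random binary matrix of full row rank over GF(2) (requires rows < cols)."""
    rng = random.Random(seed)
    while True:
        M = [[rng.randrange(2) for _ in range(cols)] for _ in range(rows)]
        if gf2_rank([row[:] for row in M]) == rows:
            return M

def linear_hash(M, y_bits):
    """Two-to-one linear hash over GF(2): z = M . y."""
    return tuple(sum(m * y for m, y in zip(row, y_bits)) % 2 for row in M)
```

Hashing all 2^m inputs of such a matrix and grouping by output shows exactly two preimages per output, mirroring the pair {y, y′} = h^{−1}(z) from which the receiver cannot tell which one the sender can invert.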