scispace - formally typeset
Author

I-En Liao

Bio: I-En Liao is an academic researcher from National Chung Hsing University. The author has contributed to research in topics such as Efficient XML Interchange and Authentication, has an h-index of 16, and has co-authored 57 publications receiving 1,310 citations.


Papers
Journal ArticleDOI
TL;DR: This paper proposes a new password authentication scheme that supports the Diffie-Hellman key agreement protocol over insecure networks; the user and the system can then use the agreed session key to encrypt and decrypt their messages with a symmetric cryptosystem.

279 citations
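A minimal sketch of the Diffie-Hellman agreement the scheme above builds on, using textbook toy parameters (p = 23, g = 5 are illustrative only; real deployments use 2048-bit+ groups or elliptic curves):

```python
import secrets

def dh_demo(p: int = 23, g: int = 5) -> int:
    """Textbook Diffie-Hellman over a toy group: both parties derive
    the same session key without ever transmitting it."""
    a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's private exponent
    A = pow(g, a, p)                   # Alice sends A to Bob
    B = pow(g, b, p)                   # Bob sends B to Alice
    k_alice = pow(B, a, p)             # Alice computes B^a = g^(ab)
    k_bob = pow(A, b, p)               # Bob computes A^b = g^(ab)
    assert k_alice == k_bob            # shared secret agrees
    return k_alice
```

The agreed value would then seed a symmetric cipher, as the paper describes.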

Journal ArticleDOI
TL;DR: The scheme proposed in this paper can enhance the security of Zhu and Ma's scheme and is also simple and efficient.
Abstract: In a paper recently published in the IEEE Transactions on Consumer Electronics, Zhu and Ma proposed a new authentication scheme with anonymity for wireless environments. However, this paper shows that Zhu and Ma's scheme has some security weaknesses. Therefore, in this paper, a slight modification to their scheme is proposed to address these shortcomings. As a result, the scheme proposed in this paper can enhance the security of Zhu and Ma's scheme. Finally, the performance of this scheme is analyzed: compared with the Zhu-Ma scheme, it remains simple and efficient.

274 citations

Proceedings ArticleDOI
22 Aug 2005
TL;DR: This paper shows that Das, Saxena, and Gulati's scheme is vulnerable to several attacks and proposes a slight modification to their scheme that remedies these weaknesses; the improved scheme enhances the security of the original.
Abstract: In a paper recently published in the IEEE Transactions on Consumer Electronics, Das, Saxena, and Gulati proposed a dynamic ID-based remote user authentication scheme using smart cards that allows users to choose and change their passwords freely, and does not maintain any verifier table. It can protect against ID-theft, replay, forgery, guessing, insider, and stolen-verifier attacks. However, this paper shows that Das, Saxena, and Gulati's scheme is vulnerable to several attacks. Therefore, we propose a slight modification to their scheme to remedy these weaknesses. As a result, the improved scheme can enhance the security of Das, Saxena, and Gulati's scheme. In addition, the proposed scheme adds little computational cost; compared with their scheme, our scheme is also efficient.

165 citations
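The dynamic-ID idea above can be sketched as follows. This is a generic illustration, not the exact Das-Saxena-Gulati protocol or its improvement; `card_secret` is a hypothetical stand-in for the value stored on the smart card:

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    """One-way hash used throughout the sketch (SHA-256)."""
    digest = hashlib.sha256()
    for p in parts:
        digest.update(p)
    return digest.digest()

def make_dynamic_id(password: bytes, card_secret: bytes):
    """Derive a fresh, unlinkable login ID for each session.
    A random nonce makes every dynamic ID different, which is what
    defeats ID-theft and replay in schemes of this family."""
    nonce = secrets.token_bytes(16)
    cid = h(password, card_secret, nonce)
    return cid, nonce               # cid sent to server, nonce for verification
```

No verifier table is needed: the server recomputes the hash from the transmitted nonce and its own copy of the secrets.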

Journal ArticleDOI
TL;DR: The IFP-growth (improved FP-growth) algorithm is proposed, which has a lower memory requirement and better performance than FP-tree based algorithms and outperforms the nonordfp algorithm in most cases.
Abstract: Many algorithms have been proposed to efficiently mine association rules. One of the most important approaches is FP-growth. Without candidate generation, FP-growth compresses the information needed for mining frequent itemsets into an FP-tree and recursively constructs FP-trees to find all frequent itemsets. Performance results have demonstrated that the FP-growth method performs extremely well. In this paper, we propose the IFP-growth (improved FP-growth) algorithm to improve the performance of FP-growth. There are three major features of IFP-growth. First, it employs an address-table structure to lower the complexity of forming the entire FP-tree. Second, it uses a new structure called FP-tree+ to reduce the need for building conditional FP-trees recursively. Third, by using the address-table and FP-tree+, the proposed algorithm has a lower memory requirement and better performance than FP-tree based algorithms. The experimental results show that IFP-growth requires relatively little memory during the mining process. Even when the minimum support is low, the space needed by IFP-growth is about one half of that of FP-growth and about one fourth of that of the nonordfp algorithm. In execution time, our method outperforms FP-growth by 1 to 300 times under different minimum supports. The proposed algorithm also outperforms the nonordfp algorithm in most cases. As a result, IFP-growth is very suitable for high performance applications.

85 citations
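For readers unfamiliar with the task, here is a brute-force sketch of what frequent-itemset mining computes; FP-growth and IFP-growth produce the same answer far more efficiently via the FP-tree, so this is only a specification of the output, not of the paper's method:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Count every candidate itemset and keep those appearing in at
    least `min_support` transactions. Stops growing candidates once a
    size yields nothing frequent (Apriori property: every subset of a
    frequent itemset must itself be frequent)."""
    items = sorted({i for t in transactions for i in t})
    result = {}
    for k in range(1, len(items) + 1):
        found = False
        for cand in combinations(items, k):
            s = frozenset(cand)
            count = sum(1 for t in transactions if s <= set(t))
            if count >= min_support:
                result[s] = count
                found = True
        if not found:
            break
    return result
```

On the toy database `[{"a","b"}, {"a","c"}, {"a","b","c"}]` with support 2, this finds {a}, {b}, {c}, {a,b}, and {a,c}.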

Journal ArticleDOI
TL;DR: The experimental results indicate that the visualizations produced by the proposed algorithm are better than those of other visualization methods; the proposed scheme not only makes clustering results intuitively easy to understand but also visualizes unlabeled data sets effectively.
Abstract: A self-organizing map (SOM) is a nonlinear, unsupervised neural network model that can be used for data clustering and visualization. One of the major shortcomings of the SOM algorithm is the difficulty non-expert users have interpreting the information in a trained SOM. In this paper, this problem is tackled by introducing an enhanced visualization method consisting of three major steps: (1) calculating single-linkage inter-neuron distances, (2) counting the number of data points in each neuron, and (3) finding cluster boundaries. The experimental results show that the proposed approach effectively demonstrates the data distribution, inter-neuron distances, and cluster boundaries, and that its visualizations are better than those of other visualization methods. Furthermore, the proposed visualization scheme not only makes clustering results intuitively easy to understand but also performs well on unlabeled data sets.

63 citations
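Steps (1) and (2) above can be approximated in a short sketch. This simplification uses the mean distance to grid neighbours (a U-matrix-style view) rather than the paper's single-linkage measure; cluster boundaries show up as ridges of large neighbour distance:

```python
from math import dist

def som_summary(weights, data):
    """weights: 2-D grid (list of rows) of neuron weight vectors from a
    trained SOM. Returns (neigh, hits): each neuron's mean distance to
    its grid neighbours, and how many data points have that neuron as
    best-matching unit (BMU)."""
    rows, cols = len(weights), len(weights[0])
    neigh = [[0.0] * cols for _ in range(rows)]
    hits = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            ds = [dist(weights[r][c], weights[nr][nc])
                  for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                  if 0 <= nr < rows and 0 <= nc < cols]
            neigh[r][c] = sum(ds) / len(ds)
    for x in data:
        br, bc = min(((r, c) for r in range(rows) for c in range(cols)),
                     key=lambda rc: dist(x, weights[rc[0]][rc[1]]))
        hits[br][bc] += 1          # count data mapped to this BMU
    return neigh, hits
```

Plotting `neigh` as a heatmap with `hits` overlaid gives the kind of boundary-plus-density picture the paper aims at.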


Cited by
01 Jan 2002

9,314 citations

01 Apr 1997
TL;DR: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind, emphasizing the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity.
Abstract: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind. The emphasis is on the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity. Topics covered include an introduction to the concepts in cryptography, attacks against cryptographic systems, key use and handling, random bit generation, encryption modes, and message authentication codes. Recommendations on algorithms and further reading are given at the end of the paper. This paper should enable the reader to build, understand, and evaluate system descriptions and designs based on the cryptographic components described in the paper.

2,188 citations
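As one concrete instance of the message authentication codes the paper covers, a minimal HMAC-SHA256 example using Python's standard library:

```python
import hashlib
import hmac

def tag(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag: a receiver sharing `key` recomputes
    it to verify both the integrity and the authenticity of `message`."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, mac: bytes) -> bool:
    # compare_digest runs in constant time, avoiding timing side channels
    return hmac.compare_digest(tag(key, message), mac)
```

Any change to the message (or the key) makes verification fail, which is exactly the integrity/authenticity guarantee the abstract refers to.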

Proceedings Article
01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually, and to use this conceptual structuring to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.

2,134 citations
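The co-occurrence statistics mentioned above can be illustrated at the concept-tag level. This is a deliberate simplification: the deliverable computes co-occurrence over image descriptors to capture spatial relations, whereas this sketch only counts which concepts appear together per image:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(tag_sets):
    """Count how often each pair of concepts appears together across a
    collection of images, each given as a set of concept tags. Pairs
    are stored in sorted order so (a, b) and (b, a) share one count."""
    counts = Counter()
    for tags in tag_sets:
        for a, b in combinations(sorted(tags), 2):
            counts[(a, b)] += 1
    return counts
```

Normalizing these counts yields the co-occurrence matrix that can complement individual low-level descriptors.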

Journal Article
TL;DR: This work presents a general methodology and two protocol constructions that result in the first two public-key traitor tracing schemes with constant transmission rate in settings where plaintexts can be calibrated to be sufficiently large.
Abstract: An important open problem in the area of Traitor Tracing is designing a scheme with constant expansion of the size of keys (users' keys and the encryption key) and of the size of ciphertexts with respect to the size of the plaintext. This problem is known from the introduction of Traitor Tracing by Chor, Fiat and Naor. We refer to such schemes as traitor tracing with constant transmission rate. Here we present a general methodology and two protocol constructions that result in the first two public-key traitor tracing schemes with constant transmission rate in settings where plaintexts can be calibrated to be sufficiently large. Our starting point is the notion of copyrighted function which was presented by Naccache, Shamir and Stern. We first solve the open problem of discrete-log-based and public-key-based copyrighted function. Then, we observe the simple yet crucial relation between (public-key) copyrighted encryption and (public-key) traitor tracing, which we exploit by introducing a generic design paradigm for designing constant transmission rate traitor tracing schemes based on copyrighted encryption functions. Our first scheme achieves the same expansion efficiency as regular ElGamal encryption. The second scheme introduces only a slightly larger (constant) overhead, however, it additionally achieves efficient black-box traitor tracing (against any pirate construction).

649 citations
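For reference, textbook ElGamal, whose constant ciphertext expansion (two group elements per plaintext element) is the efficiency benchmark the first scheme above matches. The prime p = 467 and generator g = 2 are toy parameters for illustration only:

```python
import secrets

P, G = 467, 2   # toy group; real deployments use 2048-bit+ primes or curves

def keygen():
    x = secrets.randbelow(P - 2) + 1        # private key
    return x, pow(G, x, P)                  # (private, public)

def encrypt(y: int, m: int):
    """Encrypt 1 <= m < P under public key y; ciphertext is two
    group elements, hence constant expansion."""
    k = secrets.randbelow(P - 2) + 1        # fresh per-message randomness
    return pow(G, k, P), (m * pow(y, k, P)) % P

def decrypt(x: int, c1: int, c2: int) -> int:
    # divide c2 by c1^x = y^k, using Fermat inversion mod the prime P
    return (c2 * pow(pow(c1, x, P), P - 2, P)) % P
```

The traitor-tracing constructions add tracing on top of this shape without changing the asymptotic ciphertext-to-plaintext ratio.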

Journal ArticleDOI
TL;DR: Pros and cons of the three positioning technologies are presented in terms of coverage, accuracy, and reliability, followed by a discussion of the implications for LBS using the 3G iPhone and similar mobile devices.
Abstract: The 3G iPhone was the first consumer device to provide a seamless integration of three positioning technologies: Assisted GPS (A-GPS), WiFi positioning and cellular network positioning. This study presents an evaluation of the accuracy of locations obtained using these three positioning modes on the 3G iPhone. A-GPS locations were validated using surveyed benchmarks and compared to a traditional low-cost GPS receiver running simultaneously. WiFi and cellular positions for indoor locations were validated using high resolution orthophotography. Results indicate that A-GPS locations obtained using the 3G iPhone are much less accurate than those from regular autonomous GPS units (average median error of 8 m for ten 20-minute field tests) but appear sufficient for most Location Based Services (LBS). WiFi locations using the 3G iPhone are much less accurate (median error of 74 m for 58 observations) and fail to meet the published accuracy specifications. Positional errors in WiFi also reveal erratic spatial patterns resulting from the design of the calibration effort underlying the WiFi positioning system. Cellular positioning using the 3G iPhone is the least accurate positioning method (median error of 600 m for 64 observations), consistent with previous studies. Pros and cons of the three positioning technologies are presented in terms of coverage, accuracy and reliability, followed by a discussion of the implications for LBS using the 3G iPhone and similar mobile devices.

451 citations
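The accuracy statistic used throughout the study, median positioning error, is simple to compute. A minimal sketch, assuming estimated and true positions are already projected to a common metric plane (the paper validates against surveyed benchmarks and orthophotography):

```python
from math import dist
from statistics import median

def median_error(estimates, truths):
    """Median Euclidean distance between each position fix and its
    ground-truth location; the robust statistic behind figures like
    8 m (A-GPS), 74 m (WiFi), and 600 m (cellular)."""
    return median(dist(e, t) for e, t in zip(estimates, truths))
```

The median is preferred over the mean here because occasional gross outliers (e.g. a bad WiFi fix) would otherwise dominate the comparison.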