
Showing papers in "IEICE Transactions on Information and Systems in 2009"


Journal ArticleDOI
TL;DR: The progress of PSO research so far, and the recent achievements for application to large-scale optimization problems are reviewed.
Abstract: Particle Swarm Optimization (PSO) is a search method which utilizes a set of agents that move through the search space to find the global minimum of an objective function. The trajectory of each particle is determined by a simple rule incorporating the current particle velocity and the exploration histories of the particle and its neighbors. Since its introduction by Kennedy and Eberhart in 1995, PSO has attracted many researchers due to its search efficiency even for high-dimensional objective functions with multiple local optima. The dynamics of the PSO search have been investigated and numerous variants for improvement have been proposed. This paper reviews the progress of PSO research so far and recent achievements in its application to large-scale optimization problems.
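
As a rough illustration of the velocity-update rule described above (current velocity plus attraction toward the particle's own best position and the best position found by its neighborhood), a minimal global-best PSO sketch is given below; the inertia and acceleration coefficients, the bounds, and the sphere test function are illustrative choices, not values taken from the paper.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal global-best PSO sketch: inertia-weight velocity update with
    personal-best and global-best attraction (all parameter values are
    illustrative)."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros((n_particles, dim))                 # velocities
    pbest = x.copy()                                 # personal best positions
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()         # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Example: minimize the 10-dimensional sphere function.
best_x, best_f = pso(lambda p: np.sum(p ** 2), dim=10)
```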

98 citations


Journal ArticleDOI
TL;DR: Subjective evaluation using adaptation data of 50 sentences per style shows that the proposed methods outperform conventional speaker-dependent model training when the same amount of speech data from the target speaker is used.
Abstract: This paper presents methods for controlling the intensity of emotional expressions and speaking styles of an arbitrary speaker's synthetic speech by using a small amount of his/her speech data in HMM-based speech synthesis. Model adaptation approaches are introduced into the style control technique based on the multiple-regression hidden semi-Markov model (MRHSMM). Two different approaches are proposed for training a target speaker's MRHSMMs. The first is MRHSMM-based model adaptation, in which a pretrained MRHSMM is adapted to the target speaker's model; for this purpose, we formulate the MLLR adaptation algorithm for the MRHSMM. The second utilizes simultaneous adaptation of speaker and style from an average voice model to obtain the target speaker's style-dependent HSMMs, which are used for the initialization of the MRHSMM. Subjective evaluation using adaptation data of 50 sentences per style shows that the proposed methods outperform conventional speaker-dependent model training when the same amount of speech data from the target speaker is used.

46 citations


Journal ArticleDOI
TL;DR: A new order-preserving encryption scheme that provides secure queries by hiding the order is introduced and provides efficient queries because any user who has the encryption key knows the order.
Abstract: The need for data encryption that protects sensitive data in a database has increased rapidly. However, encrypted data can no longer be efficiently queried because nearly all of the data must be decrypted. Several order-preserving encryption schemes that enable indexes to be built over encrypted data have been suggested to solve this problem. They allow any comparison operation to be directly applied to encrypted data. However, one of the main disadvantages of these schemes is that they expose sensitive data to inference attacks with order information, especially when the data are used together with unencrypted columns in the database. In this study, a new order-preserving encryption scheme that provides secure queries by hiding the order is introduced. Moreover, it provides efficient queries because any user who has the encryption key knows the order. The proposed scheme is designed to be efficient and secure in such a mixed environment, where encrypted and unencrypted columns coexist. Thus, it is possible to encrypt only sensitive data while leaving other data unencrypted. The encryption is not only robust against order exposure, but also shows high performance for any query over encrypted data. In addition, the proposed scheme provides strong updates without assumptions about the plaintext distribution. This allows it to be integrated easily with the existing database system.

41 citations


Journal ArticleDOI
TL;DR: The inter-relationship between the two types of profiles is discussed and studied so that frequently observed malware behaviors can be identified in view of the scan-malware chain.
Abstract: Considering the rapid increase of highly organized and sophisticated malware, practical countermeasures, especially against zero-day attacks, need to be developed urgently. Several research activities have already been carried out focusing on statistical analysis of network events by means of global network sensors (the so-called macroscopic approach) as well as on direct malware analysis such as code analysis (the so-called microscopic approach). However, in the current research activities it is not at all clear how to correlate the network behaviors obtained from the macroscopic approach with the malware behaviors obtained from the microscopic approach. In this paper, on one side, network behaviors observed on a darknet are analyzed to produce scan profiles, and on the other side, malware behaviors obtained from honeypots are analyzed to produce a set of profiles containing malware characteristics. The inter-relationship between these two types of profiles is then discussed and studied so that frequently observed malware behaviors can be identified in view of the scan-malware chain.

35 citations


Journal ArticleDOI
TL;DR: A space-efficient approximation algorithm is presented for the grammar-based compression problem, which asks, for a given string, to find a smallest context-free grammar deriving that string.
Abstract: A space-efficient approximation algorithm for the grammar-based compression problem, which asks, for a given string, to find a smallest context-free grammar deriving that string, is presented. For an input of length n and an optimum CFG size g, the algorithm consumes only O(g log g) space and O(n log* n) time to achieve an O((log* n) log n) approximation ratio to the optimum compression, where log* n is the maximum number of logarithms satisfying log log … log n > 1. This ratio is thus regarded as almost O(log n), which is the currently best approximation ratio. While g depends on the string, it is known that $g=\Omega(\log n)$ and $g=O\left(\frac{n}{\log_k n}\right)$ for strings over a k-letter alphabet [12].
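
To make the O((log* n) log n) ratio concrete, the iterated logarithm used in the abstract can be computed as below (base-2 logarithms are assumed; off-by-one variants of log* do not affect the asymptotics).

```python
import math

def log_star(n) -> int:
    """log*(n) as defined above: the maximum number of nested (base-2)
    logarithms whose result is still greater than 1."""
    count = 0
    n = math.log2(n)
    while n > 1:
        count += 1
        n = math.log2(n)
    return count

# log* grows extremely slowly, so O((log* n) log n) behaves essentially
# like O(log n) for any practical input length.
print(log_star(65536))       # -> 3
print(log_star(2 ** 65536))  # -> 4
```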

31 citations


Journal ArticleDOI
TL;DR: This paper presents an efficient implementation of pairing over MICAz, which is widely used as a sensor node for ubiquitous sensor network, and improves the speed of ηT pairing by using a new efficient multiplication specialized for ATmega128L, called the block comb method and several optimization techniques to save the number of data load/store operations.
Abstract: Pairing-based cryptography provides many novel cryptographic applications such as ID-based cryptosystems and efficient broadcast encryption. The security problems in ubiquitous sensor networks have been discussed in many papers, and pairing-based cryptography is a crucial technique for solving them. Due to the limited resources of current sensor nodes, it is challenging to optimize the implementation of pairings on sensor nodes. In this paper we present an efficient implementation of pairing over MICAz, which is widely used as a sensor node for ubiquitous sensor networks. We improved the speed of the ηT pairing by using a new efficient multiplication specialized for the ATmega128L, called the block comb method, and several optimization techniques to reduce the number of data load/store operations. The ηT pairing over GF(2^239) runs in about 1.93 sec, which is, to the best of our knowledge, the fastest implementation of a pairing on MICAz. This substantial improvement makes pairing-based cryptography for ubiquitous sensor networks much more practical.
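
The block comb method itself is a register-level refinement for the 8-bit ATmega128L and is not reproduced here; purely as a sketch of the comb idea it builds on, the following is the textbook left-to-right comb multiplication of binary polynomials (the word size w = 8 mirrors an 8-bit MCU, and reduction modulo the field polynomial of GF(2^239) is omitted).

```python
def gf2_comb_mul(a: int, b: int, w: int = 8) -> int:
    """Textbook left-to-right comb multiplication of binary polynomials,
    where bit i of an int is the coefficient of x^i: scan bit position k
    of every w-bit word of `a` in a single pass, adding a shifted copy of
    `b` for each set bit.  (Not the paper's block comb method, and no
    field reduction is performed.)"""
    c = 0
    for k in reversed(range(w)):
        c <<= 1                        # realign previously accumulated partial products
        aa, shift = a >> k, 0
        while aa:
            if aa & 1:                 # bit k of this w-bit word of `a` is set
                c ^= b << shift        # add `b`, aligned to the word position
            aa >>= w
            shift += w
    return c

# (x^3 + x + 1)(x + 1) = x^4 + x^3 + x^2 + 1 over GF(2)
assert gf2_comb_mul(0b1011, 0b11) == 0b11101
```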

29 citations


Journal ArticleDOI
TL;DR: The method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models, to Gaussian mixture models (GMMs), and is expected to work well when the true importance function has high correlation.
Abstract: The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method, which we call the Gaussian mixture KLIEP (GM-KLIEP), is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
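
For orientation, a minimal sketch of the original kernel-model KLIEP is given below (not the GM-KLIEP proposed in the paper, which additionally learns Gaussian covariance matrices via EM); the projected-gradient optimizer, kernel width, and step size are simplifying assumptions.

```python
import numpy as np

def kliep(x_tr, x_te, sigma=1.0, n_iters=2000, lr=1e-3):
    """Sketch of kernel-model KLIEP: model the importance
    w(x) = p_te(x) / p_tr(x) as a non-negative combination of Gaussian
    kernels centred at the test points, maximise the mean log importance
    on test samples, and keep the mean importance on training samples
    equal to one."""
    def phi(x):
        d2 = ((x[:, None, :] - x_te[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))        # (n, n_te) kernel basis

    A, B = phi(x_te), phi(x_tr)
    alpha = np.ones(x_te.shape[0]) / x_te.shape[0]
    for _ in range(n_iters):
        grad = (A / (A @ alpha)[:, None]).mean(0)    # gradient of mean log-likelihood
        alpha = np.maximum(alpha + lr * grad, 0.0)   # ascent step + non-negativity
        alpha /= B.mean(0) @ alpha                   # normalise: mean train importance = 1
    return lambda x: phi(x) @ alpha                  # estimated importance function

# Usage sketch: w_hat = kliep(x_train, x_test); w_hat(x_train) estimates p_te/p_tr.
```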

29 citations


Journal ArticleDOI
TL;DR: In this paper, a lexicon-driven segmentation-recognition scheme for Bangla handwritten city-name recognition is proposed for Indian postal automation, in which a water reservoir concept is applied to pre-segment the slant-corrected city-names into possible primitive components (characters or parts of characters).
Abstract: A lexicon-driven segmentation-recognition scheme for Bangla handwritten city-name recognition is proposed for Indian postal automation. In the proposed scheme, the input document is first binarized, and a slant correction technique is then applied to handle the slanted handwriting of different individuals. Next, owing to the script characteristics of Bangla, a water reservoir concept is applied to pre-segment the slant-corrected city-names into possible primitive components (characters or parts of characters). Pre-segmented components of a city-name are then merged into possible characters to obtain the best city-name using the lexicon information. To merge these primitive components into characters and to find the optimum character segmentation, dynamic programming (DP) is applied using the total likelihood of the characters of a city-name as the objective function. To compute the likelihood of a character, the Modified Quadratic Discriminant Function (MQDF) is used. The features used in the MQDF are mainly based on the directional features of the contour points of the components. We tested our system on 84 different Bangla city-names and obtained 94.08% accuracy.
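
A hedged sketch of the lexicon-driven DP step described above is given below; the MQDF score is abstracted into a callback char_loglik, and the merge limit max_merge is an illustrative name and value, not taken from the paper.

```python
def best_segmentation(primitives, word, char_loglik, max_merge=3):
    """DP that merges consecutive primitive components into the characters
    of a lexicon word `word`, maximising the total log-likelihood.
    `char_loglik(components, ch)` stands in for the MQDF character score."""
    n, m = len(primitives), len(word)
    NEG = float("-inf")
    dp = [[NEG] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            for k in range(1, min(max_merge, i) + 1):   # character j consumes k primitives
                if dp[i - k][j - 1] == NEG:
                    continue
                s = dp[i - k][j - 1] + char_loglik(primitives[i - k:i], word[j - 1])
                if s > dp[i][j]:
                    dp[i][j], back[i][j] = s, k
    if back[n][m] is None:
        return None, NEG                # this word cannot explain the primitives
    segs, i = [], n
    for j in range(m, 0, -1):           # recover one primitive group per character
        k = back[i][j]
        segs.append(primitives[i - k:i])
        i -= k
    return list(reversed(segs)), dp[n][m]

# The recognised city-name is the lexicon entry whose best segmentation score is highest.
```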

28 citations


Journal ArticleDOI
TL;DR: A novel way to analyze malware is proposed: focus closely on the malware's external activity toward the network to correlate with a security incident.
Abstract: Malware has been recognized as one of the major security threats on the Internet. Previous research has mainly focused on a malware's internal activity in a system. However, it is crucial that malware analysis also extract the malware's external activity toward the network so that it can be correlated with security incidents. We propose a novel way to analyze malware: focus closely on the malware's external (i.e., network) activity. A malware sample is executed on a sandbox that consists of a real machine as the victim and a virtual Internet environment. Since this sandbox environment is totally isolated from the real Internet, executing the sample causes no further unwanted propagation. The sandbox is configurable so as to extract specific activities of malware, such as scan behaviors. We implement a fully automated malware analysis system with the sandbox, which enables us to carry out large-scale malware analysis. We present concrete analysis results that were obtained by using the proposed system.

28 citations


Journal ArticleDOI
TL;DR: This study developed fuzzy entropy by using the distance measure for fuzzy sets and derived a similarity measure from entropy and showed by a simple example that the maximum similarity measure can be obtained using a minimum entropy formulation.
Abstract: In this study, we investigated the relationship between similarity measures and entropy for fuzzy sets. First, we developed fuzzy entropy by using the distance measure for fuzzy sets. We pointed out that the distance between the fuzzy set and the corresponding crisp set equals fuzzy entropy. We also found that the sum of the similarity measure and the entropy between the fuzzy set and the corresponding crisp set constitutes the total information in the fuzzy set. Finally, we derived a similarity measure from entropy and showed by a simple example that the maximum similarity measure can be obtained using a minimum entropy formulation.
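
In the notation assumed here (a fuzzy set $A$, its corresponding crisp set $A_{near}$, a distance measure $d$, a fuzzy entropy $e$, and a similarity measure $s$), the two claims above can be summarized as $e(A)=d(A, A_{near})$ and $s(A, A_{near})+e(A)=$ total information in $A$; under this decomposition, minimizing the entropy maximizes the similarity, which is the idea behind the simple example mentioned in the abstract.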

27 citations


Journal ArticleDOI
TL;DR: This paper presents an autonomous community construction technology that shares a service discovered by one member with the other members in a flexible way to improve timeliness and reduce network cost, and introduces the novel idea of a snail's-pace, steadily advancing search followed by a swift boundary-confining mechanism.
Abstract: Location Based Services (LBS) are expected to become one of the major drivers of ubiquitous services due to the recent inception of GPS-enabled mobile devices, the development of the Web 2.0 paradigm, and the emergence of 3G broadband networks. With this vision in mind, the Community Context-attribute-oriented Collaborative Information Environment (CCCIE) based Autonomous Decentralized Community System (ADCS) has been proposed to enable the provision of services to specific users in a specific place at a specific time, considering various context-attributes. This paper presents an autonomous community construction technology that shares a service discovered by one member with the other members in a flexible way to improve timeliness and reduce network cost. To meet the crucial goal of real-time and context-aware community construction (provision of a service or service information to users with common interests), and to define a flexible service area in the highly dynamic operating environment of ADCS, the proposed progressive-ripple-based service discovery technique introduces the novel idea of a snail's-pace, steadily advancing search followed by a swift boundary-confining mechanism, while service area construction shares the discovered service among the members in the defined area to further improve timeliness and reduce network cost. Analysis and empirical results verify the effectiveness of the proposed technique.

Journal ArticleDOI
TL;DR: This paper describes work on a humor-equipped casual conversational system (chatterbot) and investigates the effect of humor on a user's engagement in the conversation and proposes a distinction between positive and negative engagement.
Abstract: The topic of Human Computer Interaction (HCI) has been gathering more and more scientific attention of late. A very important, but often undervalued area in this field is human engagement. That is, a person's commitment to take part in and continue the interaction. In this paper we describe work on a humor-equipped casual conversational system (chatterbot) and investigate the effect of humor on a user's engagement in the conversation. A group of users was made to converse with two systems: one with and one without humor. The chat logs were then analyzed using an emotive analysis system to check user reactions and attitudes towards each system. Results were projected on Russell's two-dimensional emotiveness space to evaluate the positivity/negativity and activation/deactivation of these emotions. This analysis indicated emotions elicited by the humor-equipped system were more positively active and less negatively active than by the system without humor. The implications of results and relation between them and user engagement in the conversation are discussed. We also propose a distinction between positive and negative engagement.

Journal ArticleDOI
TL;DR: In this paper, the static dependency pair method was extended to higher-order rewrite systems (HRSs) and simply-typed term rewriting systems (STRSs), and it has been shown that it works well on HRSs without new restrictions.
Abstract: Higher-order rewrite systems (HRSs) and simply-typed term rewriting systems (STRSs) are computational models of functional programs. We recently proposed an extremely powerful method, the static dependency pair method, which is based on the notion of strong computability, in order to prove termination in STRSs. In this paper, we extend the method to HRSs. Since HRSs include λ-abstraction but STRSs do not, we restructure the static dependency pair method to allow λ-abstraction, and show that the static dependency pair method also works well on HRSs without new restrictions.

Journal ArticleDOI
TL;DR: It is shown that the TTN possesses several attractive features, including constant node degree, small diameter, low cost, small average distance, moderate (neither too low, nor too high) bisection width, and high throughput and very low zero load latency, which provide better dynamic communication performance than that of other conventional and hierarchical networks.
Abstract: Interconnection networks play a crucial role in the performance of massively parallel computers. Hierarchical interconnection networks provide high performance at low cost by exploiting the locality that exists in the communication patterns of massively parallel computers. A Tori connected Torus Network (TTN) is a 2D-torus network of multiple basic modules, in which the basic modules are 2D-torus networks that are hierarchically interconnected to form higher-level networks. This paper addresses the architectural details of the TTN and explores aspects such as node degree, network diameter, cost, average distance, arc connectivity, bisection width, and wiring complexity. We also present a deadlock-free routing algorithm for the TTN using four virtual channels and evaluate the network's dynamic communication performance using the proposed routing algorithm under uniform and various non-uniform traffic patterns. We evaluate the dynamic communication performance of the TTN, TESH, MH3DT, mesh, and torus networks by computer simulation. It is shown that the TTN possesses several attractive features, including constant node degree, small diameter, low cost, small average distance, moderate (neither too low nor too high) bisection width, high throughput, and very low zero-load latency, which together provide better dynamic communication performance than that of the other conventional and hierarchical networks.

Journal ArticleDOI
TL;DR: A novel edge-based color constancy algorithm using support vector regression (SVR) is proposed, which is based on the higher-order structure of images; experimental results show that the algorithm is more effective than zero-order SVR color constancy methods.
Abstract: Color constancy is the ability to measure the colors of objects independent of the color of the light source. Various methods have been proposed to handle this problem, most of which depend on the statistical distributions of the pixel values. Recent studies show that incorporating image derivatives is more effective than the direct use of pixel values. Based on this idea, a novel edge-based color constancy algorithm using support vector regression (SVR) is proposed. In contrast to the existing SVR color constancy algorithm, which is computed from the zero-order structure of images, our method is based on the higher-order structure of images. The experimental results show that our algorithm is more effective than the zero-order SVR color constancy methods.
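
As a hedged sketch of the edge-based idea (derivative statistics regressed onto the illuminant with SVR), the snippet below uses scikit-learn and a simple Minkowski-norm gradient feature; the feature set, kernel, and hyper-parameters are illustrative and not those of the paper.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

def edge_features(img, p=6):
    """Per-channel Minkowski norm of first-order derivatives: one simple
    'higher-order structure' feature (the paper's exact features are not
    reproduced here)."""
    feats = []
    for c in range(3):
        gy, gx = np.gradient(img[..., c].astype(float))
        mag = np.hypot(gx, gy)
        feats.append((mag ** p).mean() ** (1.0 / p))
    feats = np.asarray(feats)
    return feats / (np.linalg.norm(feats) + 1e-12)   # keep only the illuminant direction

# Hypothetical training data: train_imgs (list of RGB arrays) and
# train_illums (n x 3 ground-truth illuminants).
# X = np.stack([edge_features(im) for im in train_imgs])
# model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0)).fit(X, train_illums)
# est_illuminant = model.predict(edge_features(test_img)[None, :])[0]
```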

Journal ArticleDOI
TL;DR: A new multi-layered artificial immune system architecture using the ideas generated from the biological immune system for solving combinatorial optimization problems and simulation results show that the proposed algorithm performs better than traditional algorithms.
Abstract: This paper presents a new multi-layered artificial immune system architecture using the ideas generated from the biological immune system for solving combinatorial optimization problems. The proposed methodology is composed of five layers. After expressing the problem as a suitable representation in the first layer, the search space and the features of the problem are estimated and extracted in the second and third layers, respectively. Through taking advantage of the minimized search space from estimation and the heuristic information from extraction, the antibodies (or solutions) are evolved in the fourth layer and finally the fittest antibody is exported. In order to demonstrate the efficiency of the proposed system, the graph planarization problem is tested. Simulation results based on several benchmark instances show that the proposed algorithm performs better than traditional algorithms.

Journal ArticleDOI
TL;DR: An optimal online algorithm is given and it is proved that there is no (2-ε)-competitive online algorithm for any positive constant ε; the problem on unweighted graphs is also considered.
Abstract: The purpose of the online graph exploration problem is to visit all the nodes of a given graph and come back to the starting node with the minimum total traverse cost. However, unlike the classical Traveling Salesperson Problem, information about the graph is given online. When an online algorithm (called a searcher) visits a node v, it learns information on the nodes and edges adjacent to v. The searcher must decide which node to visit next depending on the partial and incomplete information about the graph that it has gained in its searching process. The goodness of the algorithm is evaluated by competitive analysis. If the input graphs to be explored are restricted to trees, depth-first search always returns an optimal tour. However, if graphs have cycles, the problem is non-trivial. In this paper we consider two simple cases. First, we treat the problem on simple cycles. Recently, Asahiro et al. proved that there is a 1.5-competitive online algorithm, while no online algorithm can be (1.25-ε)-competitive for any positive constant ε. In this paper, we give an optimal online algorithm for this problem; namely, we give a $\frac{1+\sqrt{3}}{2}(\simeq1.366)$-competitive algorithm, and prove that there is no $(\frac{1+\sqrt{3}}{2}-\epsilon)$-competitive algorithm for any positive constant ε. Furthermore, we consider the problem on unweighted graphs. We also give an optimal result; namely, we give a 2-competitive algorithm and prove that there is no (2-ε)-competitive online algorithm for any positive constant ε.
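
The tree case mentioned above can be made concrete: a depth-first tour traverses every tree edge exactly twice (down and back), so its cost is twice the total edge weight, which no closed tour visiting all nodes of a tree can beat. A small sketch follows; the adjacency-dictionary representation is just for illustration.

```python
def dfs_tour_cost(tree, root=0):
    """Cost of the closed DFS tour of a weighted tree given as
    {u: [(v, w), ...]}: each edge is charged twice, once going down and
    once coming back, so the total equals 2 * (sum of edge weights)."""
    cost, visited, stack = 0, {root}, [(root, iter(tree[root]))]
    while stack:
        u, it = stack[-1]
        for v, w in it:
            if v not in visited:
                visited.add(v)
                cost += 2 * w                     # down now, back up later
                stack.append((v, iter(tree[v])))
                break
        else:
            stack.pop()
    return cost

# Tree with edges 0-1 (weight 2), 0-2 (weight 3), 2-3 (weight 1): tour cost 12.
tree = {0: [(1, 2), (2, 3)], 1: [(0, 2)], 2: [(0, 3), (3, 1)], 3: [(2, 1)]}
print(dfs_tour_cost(tree))   # -> 12
```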

Journal ArticleDOI
TL;DR: AdjScales is introduced, a method for scaling similar adjectives by their strength that combines existing Web-based computational linguistic techniques in order to automatically differentiate betweenSimilar adjectives that describe the same property by strength.
Abstract: In this study we introduce AdjScales, a method for scaling similar adjectives by their strength. It combines existing Web-based computational linguistic techniques in order to automatically differentiate between similar adjectives that describe the same property by strength. Though this kind of information is rarely present in most of the lexical resources and dictionaries, it may be useful for language learners that try to distinguish between similar words. Additionally, learners might gain from a simple visualization of these differences using unidimensional scales. The method is evaluated by comparison with annotation on a subset of adjectives from WordNet by four native English speakers. It is also compared against two non-native speakers of English. The collected annotation is an interesting resource in its own right. This work is a first step toward automatic differentiation of meaning between similar words for language learners. AdjScales can be useful for lexical resource enhancement.

Journal ArticleDOI
TL;DR: Recent advances in the kernel methods are reviewed, with emphasis on scalability for massive problems.
Abstract: Kernel methods such as the support vector machine are one of the most successful algorithms in modern machine learning. Their advantage is that linear algorithms are extended to non-linear scenarios in a straightforward way by the use of the kernel trick. However, naive use of kernel methods is computationally expensive since the computational complexity typically scales cubically with respect to the number of training samples. In this article, we review recent advances in the kernel methods, with emphasis on scalability for massive problems.
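
As one concrete example of the cubic bottleneck and of the kind of low-rank remedy such surveys cover, the sketch below contrasts exact kernel ridge regression, whose n x n solve costs O(n^3), with a Nystroem-style approximation costing roughly O(nm^2) for m landmark points; the RBF kernel and the regression setting are illustrative choices, not a summary of the article's content.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_exact(X, y, lam=1e-2, gamma=1.0):
    """Exact kernel ridge regression: solving the n x n system is O(n^3)."""
    K = rbf(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: rbf(Xq, X, gamma) @ alpha

def kernel_ridge_nystroem(X, y, m=100, lam=1e-2, gamma=1.0, seed=0):
    """Nystroem sketch: an explicit m-dimensional feature map built from m
    landmark points reduces the training cost to roughly O(n m^2)."""
    rng = np.random.default_rng(seed)
    landmarks = X[rng.choice(len(X), size=min(m, len(X)), replace=False)]
    U, s, _ = np.linalg.svd(rbf(landmarks, landmarks, gamma))
    T = U / np.sqrt(np.maximum(s, 1e-12))          # K_mm^{-1/2}
    Z = rbf(X, landmarks, gamma) @ T               # approximate feature map
    w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
    return lambda Xq: rbf(Xq, landmarks, gamma) @ T @ w
```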

Journal ArticleDOI
TL;DR: It is proved that the n-dimensional burnt pancake graph Bn is super spanning connected if and only if n ≠ 2.
Abstract: Let u and v be any two distinct vertices of an undirected graph G, which is k-connected. For 1 ≤ w ≤ k, a w-container C(u, v) of a k-connected graph G is a set of w-disjoint paths joining u and v. A w-container C(u, v) of G is a w*-container if it contains all the vertices of G. A graph G is w*-connected if there exists a w*-container between any two distinct vertices. Let κ(G) be the connectivity of G. A graph G is super spanning connected if G is i*-connected for 1 ≤ i ≤ κ(G). In this paper, we prove that the n-dimensional burnt pancake graph Bn is super spanning connected if and only if n ≠ 2.

Journal ArticleDOI
TL;DR: Simulation results based on the traveling salesman problems have demonstrated the effectiveness of the quantum crossover-based Clonal Selection Algorithm.
Abstract: The Clonal Selection Algorithm (CSA), based on the clonal selection theory proposed by Burnet, has gained much attention and wide application during the last decade. However, the proliferation process of immune cells is asexual; that is, there is no information exchange between different immune cells. As a result, the traditional CSA is often not satisfactory and is easily trapped in local optima, leading to premature convergence. To solve this problem, inspired by quantum interference mechanics, an improved quantum crossover operator is introduced and embedded in the traditional CSA. Simulation results based on traveling salesman problems (TSP) have demonstrated the effectiveness of the quantum crossover-based Clonal Selection Algorithm.

Journal ArticleDOI
TL;DR: A new selective encryption scheme and a key management scheme for layered access control of H.264/SVC, which provides a high encryption efficiency by encrypting domains selectively, according to each layer type in the enhancement-layer.
Abstract: This paper proposes a new selective encryption scheme and a key management scheme for layered access control of H.264/SVC. This scheme encrypts three domains in hierarchical layers using different keys: intra prediction modes, motion vector difference values, and sign bits of texture data. The proposed scheme offers low computational complexity, low bit-overhead, and format compliance by utilizing the H.264/SVC structure. It provides a high encryption efficiency by encrypting domains selectively, according to each layer type in the enhancement-layer. It also provides confidentiality and implicit authentication using keys derived in the proposed key management scheme for encryption. Simulation results show the effectiveness of the proposed scheme.

Journal ArticleDOI
TL;DR: The characteristics of Internet worms are identified in terms of their target finding strategy, propagation method and anti-detection capability, and state-of-the-art worm detection and worm containment schemes are explored.
Abstract: Worms are a common phenomenon in today's Internet and cause tens of billions of dollars in damage to businesses around the world each year. This article first presents various concepts related to worms, and then classifies the existing worms into four types: Internet worms, P2P worms, email worms and IM (Instant Messaging) worms, based on the space in which a worm finds a victim target. The Internet worm is the focus of this article. We identify the characteristics of Internet worms in terms of their target-finding strategy, propagation method and anti-detection capability. Then, we explore state-of-the-art worm detection and worm containment schemes. This article also briefly presents the characteristics, defense methods and related research work of P2P worms, email worms and IM worms. Nowadays, defense against worms remains largely an open problem. At the end of this article, we outline some future directions for worm research.

Journal ArticleDOI
TL;DR: The new residue number system (RNS) moduli sets have a 4n-bit dynamic range and well-formed moduli, which can result in high-performance residue-to-binary converters as well as an efficient RNS arithmetic unit.
Abstract: In this paper, the new residue number system (RNS) moduli sets {2^{2n}, 2^n - 1, 2^{n+1} - 1} and {2^{2n}, 2^n - 1, 2^{n-1} - 1} are introduced. These moduli sets have a 4n-bit dynamic range and well-formed moduli, which can result in high-performance residue-to-binary converters as well as an efficient RNS arithmetic unit. Next, efficient residue-to-binary converters for the proposed moduli sets based on the mixed-radix conversion (MRC) algorithm are presented. The converters are ROM-free and are realized using carry-save adders and modulo adders. Comparison with other residue-to-binary converters for 4n-bit dynamic range moduli sets shows that the presented designs based on the new moduli sets {2^{2n}, 2^n - 1, 2^{n+1} - 1} and {2^{2n}, 2^n - 1, 2^{n-1} - 1} improve the conversion delay and result in hardware savings. Moreover, the proposed moduli sets can lead to efficient binary-to-residue converters, and they can speed up internal RNS arithmetic processing compared with other 4n-bit dynamic range moduli sets.
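
A hedged software sketch of the mixed-radix conversion underlying such converters is given below for the first proposed moduli set; the paper's contribution is an adder-only hardware realization specialized to these moduli, whereas the generic modular inverses here are only for illustration.

```python
def mrc_to_binary(residues, moduli):
    """Mixed-radix conversion (MRC) for three pairwise-coprime moduli:
    X = a1 + a2*m1 + a3*m1*m2, with the mixed-radix digits a1, a2, a3
    recovered from the residues using modular inverses."""
    x1, x2, x3 = residues
    m1, m2, m3 = moduli
    a1 = x1 % m1
    a2 = ((x2 - a1) * pow(m1, -1, m2)) % m2
    a3 = ((x3 - a1 - a2 * m1) * pow(m1 * m2, -1, m3)) % m3
    return a1 + a2 * m1 + a3 * m1 * m2

n = 4
moduli = (2 ** (2 * n), 2 ** n - 1, 2 ** (n + 1) - 1)   # {2^{2n}, 2^n - 1, 2^{n+1} - 1}
X = 12345
assert mrc_to_binary(tuple(X % m for m in moduli), moduli) == X
```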

Journal ArticleDOI
TL;DR: This paper considers the problem of finding preference lists of n women over n men such that the men-proposing deferred acceptance algorithm (Gale-Shapley algorithm) applied to the lists produces µ, and shows a simple necessary and sufficient condition for the existence of such a set of preference lists of women over men.
Abstract: This paper deals with a strategic issue in the stable marriage model with complete preference lists (i.e., the preference list of an agent is a permutation of all the members of the opposite sex). Given complete preference lists of n men over n women, and a marriage µ, we consider the problem of finding preference lists of n women over n men such that the men-proposing deferred acceptance algorithm (Gale-Shapley algorithm) applied to the lists produces µ. We show a simple necessary and sufficient condition for the existence of such a set of preference lists of women over men. Our condition directly gives an O(n^2)-time algorithm for finding a set of preference lists, if one exists.
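
For reference, the forward procedure being inverted here, men-proposing deferred acceptance, is the textbook algorithm sketched below; the agents are 0-indexed and the example preference lists are illustrative.

```python
from collections import deque

def men_propose_da(men_prefs, women_prefs):
    """Textbook men-proposing deferred acceptance (Gale-Shapley):
    free men propose in order of their lists; each woman keeps the best
    proposer seen so far and rejects the rest."""
    n = len(men_prefs)
    rank = [{m: r for r, m in enumerate(women_prefs[w])} for w in range(n)]
    next_choice = [0] * n           # index of the next woman each man proposes to
    fiance = [None] * n             # fiance[w] = man currently engaged to woman w
    free = deque(range(n))
    while free:
        m = free.popleft()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if fiance[w] is None:
            fiance[w] = m
        elif rank[w][m] < rank[w][fiance[w]]:   # woman w prefers the new proposer
            free.append(fiance[w])
            fiance[w] = m
        else:
            free.append(m)
    return {fiance[w]: w for w in range(n)}     # man -> woman matching

men = [[0, 1, 2], [1, 0, 2], [0, 2, 1]]
women = [[1, 0, 2], [0, 1, 2], [2, 1, 0]]
print(men_propose_da(men, women))   # -> {0: 0, 1: 1, 2: 2}
```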

Journal ArticleDOI
TL;DR: This paper gives a generic construction based on secure PEKS and tag-KEM/DEM schemes and achieves modular design, and argues the previous security model is not complete regarding keyword privacy and the previous constructions are secure only in the random oracle model.
Abstract: In this paper, we study the problem of securely integrating public key encryption with keyword search (PEKS) with public key data encryption (PKE). We argue that the previous security model is not complete regarding keyword privacy and that the previous constructions are secure only in the random oracle model. We solve these problems by first defining a new security model, and then giving a generic construction which is secure in the new security model without random oracles. Our construction is based on secure PEKS and tag-KEM/DEM schemes and achieves a modular design. We also give some applications and extensions of our construction. For example, by instantiating our construction with proper components, we obtain a concrete scheme without random oracles whose performance is competitive even with previous schemes that use random oracles.

Journal ArticleDOI
TL;DR: The proposed feature extractor, which incorporates two MLNs and an In/En network, was found to provide a higher phoneme correct rate with fewer mixture components in the HMMs, and has a low computation cost.
Abstract: This paper describes a distinctive phonetic feature (DPF) extraction method for use in a phoneme recognition system; our method has a low computation cost. This method comprises three stages. The first stage uses two multilayer neural networks (MLNs): MLN_LF-DPF, which maps continuous acoustic features, or local features (LFs), onto discrete DPF features, and MLN_Dyn, which constrains the DPF context at the phoneme boundaries. The second stage incorporates inhibition/enhancement (In/En) functionalities to discriminate whether the DPF dynamic patterns of trajectories are convex or concave, where convex patterns are enhanced and concave patterns are inhibited. The third stage decorrelates the DPF vectors using the Gram-Schmidt orthogonalization procedure before feeding them into a hidden Markov model (HMM)-based classifier. In an experiment on Japanese Newspaper Article Sentences (JNAS) utterances, the proposed feature extractor, which incorporates two MLNs and an In/En network, was found to provide a higher phoneme correct rate with fewer mixture components in the HMMs.
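
The decorrelation step in the third stage is classical Gram-Schmidt orthogonalization; a minimal sketch is given below (the normalization and ordering applied to the actual DPF vectors are assumptions).

```python
import numpy as np

def gram_schmidt(V):
    """Classical Gram-Schmidt: orthonormalise the rows of V, dropping
    (near-)linearly dependent vectors."""
    basis = []
    for v in np.asarray(V, dtype=float):
        w = v - sum(np.dot(v, b) * b for b in basis)
        norm = np.linalg.norm(w)
        if norm > 1e-12:
            basis.append(w / norm)
    return np.array(basis)

# The rows of Q are orthonormal: Q @ Q.T is (numerically) the identity.
Q = gram_schmidt(np.random.default_rng(0).normal(size=(5, 8)))
assert np.allclose(Q @ Q.T, np.eye(len(Q)), atol=1e-8)
```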

Journal ArticleDOI
TL;DR: This paper presents a fuzzy-based path selection method for improving the security level, in which each cluster chooses paths based on the detection power of false data and energy efficiency.
Abstract: This paper presents a fuzzy-based path selection method for improving the security level, in which each cluster chooses paths based on the detection power of false data and energy efficiency.

Journal ArticleDOI
TL;DR: This paper surveys several state-of-the-art reordering techniques employed in Statistical Machine Translation systems and gives a brief survey and classification of several well-known reordering approaches.
Abstract: This paper surveys several state-of-the-art reordering techniques employed in Statistical Machine Translation (SMT) systems. Reordering is understood as the word-order redistribution of the translated words. In the original SMT systems, this different order is modeled only within the limits of the translation units. Relying only on the reordering provided by translation units may not be good enough for most language pairs, which might require longer reorderings. Therefore, additional techniques may be deployed to face the reordering challenge. The Statistical Machine Translation community has been very active recently in developing reordering techniques. This paper gives a brief survey and classification of several well-known reordering approaches.

Journal ArticleDOI
TL;DR: This paper investigates the reliability of general-type shared protection systems, i.e., M-for-N (M:N) protection, which can typically be applied to various telecommunication network devices, and shows that, under a certain condition, the probability distribution of the TTFF can be approximated by a simple exponential distribution.
Abstract: In this paper we investigate the reliability of general-type shared protection systems, i.e., M for N (M:N), which can typically be applied to various telecommunication network devices. We focus on the reliability that is perceived by an end user of one of the N units. We assume that any failed unit is instantly replaced by one of the M units (if available). We describe the effectiveness of such a protection system in a quantitative manner. The mathematical analysis gives a closed-form solution for the availability, and a recursive algorithm for computing the MTTFF (Mean Time to First Failure) and the MTTF (Mean Time to Failure) perceived by an arbitrary end user. We also show that, under a certain condition, the probability distribution of the TTFF (Time to First Failure) can be approximated by a simple exponential distribution. The analysis provides useful information for the analysis and design not only of telecommunication network devices but also of other general shared protection systems that are subject to service level agreements (SLAs) involving user-perceived reliability measures.
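
As a heavily hedged illustration of the kind of model involved, the sketch below computes steady-state availability for an M-for-N arrangement as a birth-death chain under assumptions (exponential failures and repairs, cold spares, unlimited repair capacity, availability measured as the average in-service fraction of the N user slots) that are illustrative and not necessarily those of the paper's analysis.

```python
import numpy as np

def mn_protection_availability(N, M, lam, mu):
    """Birth-death sketch of M-for-N protection: N in-service units that
    each fail at rate lam, M cold spares, instant replacement while a
    spare is available, and independent repairs at rate mu (unlimited
    repair capacity).  The state k is the number of failed units."""
    states = N + M + 1
    up = [lam * (N - max(0, k - M)) for k in range(states)]   # k -> k+1 rates
    pi = np.ones(states)
    for k in range(1, states):                 # detailed balance of the chain
        pi[k] = pi[k - 1] * up[k - 1] / (k * mu)
    pi /= pi.sum()
    # Average fraction of the N user slots that are out of service.
    unavail = sum(pi[k] * max(0, k - M) for k in range(states)) / N
    return 1.0 - unavail

# Example: 2 spares protecting 10 units, unit MTBF 1000 h, MTTR 10 h.
print(mn_protection_availability(N=10, M=2, lam=1 / 1000, mu=1 / 10))
```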