
Showing papers in "Journal of Information Processing Systems in 2009"


Journal ArticleDOI
TL;DR: A discussion outlining the incentive for using face recognition, the applications of this technology, and some of the difficulties plaguing current systems with regard to this task has been provided.
Abstract: Face recognition presents a challenging problem in the field of image analysis and computer vision, and as such has received a great deal of attention over the last few years because of its many applications in various domains. Face recognition techniques can be broadly divided into three categories based on the face data acquisition methodology: methods that operate on intensity images; those that deal with video sequences; and those that require other sensory data such as 3D information or infra-red imagery. In this paper, an overview of some of the well-known methods in each of these categories is provided and some of the benefits and drawbacks of the schemes mentioned therein are examined. Furthermore, a discussion outlining the incentive for using face recognition, the applications of this technology, and some of the difficulties plaguing current systems with regard to this task has also been provided. This paper also mentions some of the most recent algorithms developed for this purpose and attempts to give an idea of the state of the art of face recognition technology.

751 citations


Journal ArticleDOI
TL;DR: A novel high-level security metrics objective taxonomization model for software-intensive systems that focuses on the security level and security performance of technical systems while taking into account the alignment of metrics objectives with different business and other management goals is introduced.
Abstract: We introduce a novel high-level security metrics objective taxonomization model for software-intensive systems. The model systematizes and organizes security metrics development activities. It focuses on the security level and security performance of technical systems while taking into account the alignment of metrics objectives with different business and other management goals. The model emphasizes the roles of security-enforcing mechanisms, the overall security quality of the system under investigation, and secure system lifecycle, project and business management. Security correctness, effectiveness and efficiency are seen as the fundamental measurement objectives, determining the directions for more detailed security metrics development. Integration of the proposed model with risk-driven security metrics development approaches is also discussed. Keywords: Security Metrics, Security Objectives, Taxonomy, Correctness, Effectiveness, Efficiency 1. Introduction The increasing complexity and connectivity of software-intensive systems, products and services are boosting the need for pertinent and reliable software security and trusted system solutions. Systematic approaches to measuring security are needed to obtain evidence of the security level and performance of systems, products and services. In addition, early security evidence will enable cost-effective secure software development. It is easier to make business and engineering decisions concerning security if sufficient and credible evidence of security is available. The field of developing security metrics systematically is young. The complication behind the immaturity of security metrics is that the current practice of security is still a highly diverse field, and holistic and widely accepted approaches are still missing [1]. Thus far, attempts to measure security have achieved only limited success [2]. Lately, security metrics has become an emerging research area rapidly gaining momentum. The main contribution of this study is to introduce a novel model for security metrics objective taxonomization of technical systems and discuss the motivation for it. The model systematizes and organizes security metrics development. We analyze the role of different emphasis areas and fundamental measurement objectives and show how the model can be integrated with risk-driven security metrics development activities. In our model, we have made a premeditated choice not to divide security metrics into technical, operational and organizational metrics, which is the most common classification. The rest of this article is organized in the following way. Section 2 analyzes related work, and Section 3 gives a short introduction to security metrics. Section 4 presents our Security Metrics Objective Segments (SMOS) model, and Section 5 discusses the design of security metrics taxonomies with the help of the proposed model. Section 6 analyzes how the model can be integrated with the security metrics development process. Section 7 incorporates a discussion on the results and security metrics in general terms, and finally, Section 8 gives conclusions and finalizes the study with some future research questions.

60 citations


Journal ArticleDOI
TL;DR: This paper proposes a dynamic reservation scheme of Physical Cell Identities (PCI) for 3GPP LTE femtocell systems, and results show that the proposed scheme reduces average delay for identifying detected cells, and increases network capacity within equal delay constraints.
Abstract: A large number of phone calls and data services will take place in indoor environments. In Long Term Evolution (LTE), femtocell, as a home base station for indoor coverage extension and wideband data service, has recently gained significant interest from operators and consumers. Since a femtocell is frequently turned on and off by a personal owner, not by a network operator, one of the key issues is that femtocells should be identified autonomously, without system information, to support handover from macrocell to femtocell. In this paper, we propose a dynamic reservation scheme of Physical Cell Identities (PCI) for 3GPP LTE femtocell systems. There are several reserving types, and each type reserves a different number of PCIs for femtocells. The transition among the types depends on the deployed number of femtocells, or the number of PCI confusion events. Accordingly, flexible use of PCIs can decrease PCI confusion. This reduces searching time for femtocells, and it is helpful for quick handover from macrocell to femtocell. Simulation results show that our proposed scheme reduces average delay for identifying detected cells, and increases network capacity within equal delay constraints. The 3rd Generation Partnership Project (3GPP) is working on the standardization of asynchronous communication systems. This technology is being enhanced gradually, ensuring higher user data rates, bigger system capacity, and lower cost. The wideband CDMA (WCDMA) system is standardized in 3GPP release 99/4, which is being deployed around the world. Release 5 is related to High Speed Downlink Packet Access (HSDPA), and it improves the downlink packet transmission speed theoretically up to 14.4 Mbps. High Speed Uplink Packet Access (HSUPA) is enhanced up to 5.76 Mbps in uplink, and it is standardized in Release 6. We refer to both HSDPA and HSUPA simply as High Speed Packet Access (HSPA). In release 7, High Speed Packet Access Evolution (eHSPA, HSPA+) is standardized. eHSPA is based on the HSPA network with a simple upgrade, and it supports more bandwidth efficiency and lower latency. The maximum data rate is 28.8 Mbps in downlink and 11.5 Mbps in uplink (1). However, users still require further system improvements. The technology is dramatically enhanced in release 8, where the standard of Long Term Evolution (LTE) is currently being established. The main objectives of LTE are higher data rates, lower latency, increased capacity, enhanced coverage, and an optimized system for the packet switching network (2). LTE also considers a femtocell, which is referred to as a home base station for indoor coverage extension and overall network performance enhancement (3). Recently, the LTE-Advanced standard, targeting 1 Gbps for low mobility, is being discussed in release 9. We look into general features and requirements of femtocells. Important issues related to access control are described, followed by our contributions and the organization of the paper.
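The transition among reserving types is easy to picture in code. The sketch below is a hypothetical illustration only, assuming the reserving types differ in reserved pool size and that transitions are driven by the femtocell count and PCI confusion events, as the abstract states; the pool sizes, thresholds, and names are invented for illustration.

```python
# Hypothetical sketch of the reserving-type transition idea: each type
# reserves a different number of PCIs for femtocells, and the network moves
# between types as deployments or PCI-confusion events grow. All constants
# are assumptions, not values from the paper.
RESERVING_TYPES = [8, 16, 32, 64]  # assumed PCI pool sizes per type

def next_type(current_idx, num_femtocells, confusion_events,
              fill_ratio=0.8, confusion_threshold=5):
    """Return the index of the reserving type to use next."""
    pool = RESERVING_TYPES[current_idx]
    # Grow the reserved pool when nearly exhausted or when confusion is frequent.
    if (num_femtocells > fill_ratio * pool or
            confusion_events > confusion_threshold):
        return min(current_idx + 1, len(RESERVING_TYPES) - 1)
    # Shrink when the next-smaller pool would still comfortably fit all femtocells.
    if current_idx > 0 and num_femtocells < fill_ratio * RESERVING_TYPES[current_idx - 1]:
        return current_idx - 1
    return current_idx
```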

38 citations


Journal ArticleDOI
TL;DR: This paper addresses the problem of scheduling precedence-constrained parallel applications in DCSs, and presents two scheduling algorithms that adopt dynamic voltage and frequency scaling (DVFS) to minimize energy consumption.
Abstract: Power consumed by modern computer systems, particularly servers in data centers, has almost reached an unacceptable level. However, their energy consumption is often not justifiable when their utilization is considered; that is, they tend to consume more energy than needed for their computing-related jobs. Task scheduling in distributed computing systems (DCSs) can play a crucial role in increasing utilization; this will lead to the reduction in energy consumption. In this paper, we address the problem of scheduling precedence-constrained parallel applications in DCSs, and present two energy-conscious scheduling algorithms. Our scheduling algorithms adopt dynamic voltage and frequency scaling (DVFS) to minimize energy consumption. DVFS, as an efficient power management technology, has been increasingly integrated into many recent commodity processors. DVFS enables these processors to operate with different voltage supply levels at the expense of sacrificing clock frequencies. In the context of scheduling, this multiple voltage facility implies that there is a trade-off between the quality of schedules and energy consumption. Our algorithms effectively balance these two performance goals using a novel objective function and its variant, which take into account both goals; this claim is verified by the results obtained from our extensive comparative evaluation study.
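As an illustration of the schedule-quality/energy trade-off, the sketch below combines finish time and a simple dynamic-energy model into one weighted objective. This is not the paper's actual objective function, which the abstract does not give; the weighting scheme and the P ∝ V²f energy model are assumptions.

```python
# Illustrative only: a weighted objective in the spirit the abstract
# describes, trading schedule quality (finish time) against energy.
def energy(voltage, freq, exec_time):
    # Assumed dynamic-power model: P ~ V^2 * f, so E = P * t.
    return (voltage ** 2) * freq * exec_time

def objective(finish_time, voltage, freq, exec_time, alpha=0.5):
    # alpha balances makespan against energy; in practice both terms
    # should be normalized so the weighting is meaningful.
    return alpha * finish_time + (1 - alpha) * energy(voltage, freq, exec_time)

# A DVFS-capable processor exposes discrete (voltage, relative-frequency)
# pairs; a scheduler picks the pair minimizing the objective for a task.
vf_pairs = [(1.2, 1.0), (1.0, 0.8), (0.8, 0.6)]  # assumed levels
base_time = 10.0  # task execution time at full frequency
best = min(vf_pairs,
           key=lambda vf: objective(base_time / vf[1], vf[0], vf[1],
                                    base_time / vf[1]))
print(best)
```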

30 citations


Journal ArticleDOI
TL;DR: A scalable platform that requires minimum human interaction during set-up and monitoring of wireless biosensors is proposed, which could increase the quality of life and significantly lower healthcare costs for everyone in general and for the elderly and those with disabilities in particular.
Abstract: In this paper, we propose a framework for the real-time monitoring of wireless biosensors. This is a scalable platform that requires minimum human interaction during set-up and monitoring. Its main components include a biosensor, a smart gateway to automatically set up the body area network, a mechanism for delivering data to an Internet monitoring server, and automatic data collection, profiling and feature extraction from bio-potentials. Such a system could increase the quality of life and significantly lower healthcare costs for everyone in general, and for the elderly and those with disabilities in particular.

22 citations


Journal ArticleDOI
TL;DR: A simple, efficient algorithm to detect nodes that are near the boundary of the sensor field as well as near the boundaries of holes, which relies purely on the connectivity information of the underlying communication graph and does not require any information on the location of nodes.
Abstract: The awareness of boundaries in wireless sensor networks has many benefits. The identification of boundaries is especially challenging since typical wireless sensor networks consist of low-capability nodes that are unaware of their geographic location. In this paper, we propose a simple, efficient algorithm to detect nodes that are near the boundary of the sensor field as well as near the boundaries of holes. Our algorithm relies purely on the connectivity information of the underlying communication graph and does not require any information on the location of nodes. We introduce the 2-neighbor graph concept, and then make use of it to identify nodes near boundaries. The results of our experiment show that our algorithm carries out the task of topological boundary detection correctly and efficiently. Keywords: Wireless sensor network, Hole, Boundary detection, 2-neighbor graph 1. Introduction The task of boundary detection in wireless sensor networks is stated as follows: Given a wireless sensor network deployed in an area called the sensor field, each node must ascertain whether it is located near the boundary of the sensor field as well as the boundaries of holes. In this paper, we focus on boundary detection in wireless sensor networks without information on the location of nodes. The proposed solutions will rely purely on topological features, i.e., the connectivity information of the underlying communication graph. We emphasize the topological (topology-based) methods for the following reasons. First, it would be costly to equip each node with a positioning device such as a GPS unit to obtain location information at the nodes. With thousands of nodes deployed, we would have to spend a lot of money on positioning devices. In order to reduce the cost, we may equip only a few nodes, called anchors, with positioning devices and apply a localization algorithm to infer the locations of non-anchor nodes [11]. Unfortunately, no localization algorithm developed to date can give derived locations that reflect the true locations of nodes. Second, positioning devices consume a lot of the energy of the nodes, which cannot be recharged, thereby reducing the lifetime of the nodes. In addition, we cannot always obtain exact location information since positioning devices cannot work entirely free from error. Thus, the requirement of location information available at the nodes will lead to expensive and short-lived sensor networks. Boundary detection has many applications. Hole formation is often caused by extreme events such as fire, earthquake, inundation, and so forth. As such, the identification of holes is very useful in wireless sensor network applications that monitor such events. For some sensor network applications such as data-centric storage, which does not require the true locations of sensor nodes, invented (virtual) locations can be used instead. Several methods for computing virtual locations have been proposed [11]. But, as already examined in [7], the resultant virtual coordinates are distorted in comparison to the true geometry of the communication graph. The authors of [7] showed that boundary awareness can be used to build less distorted virtual coordinates. In addition, boundary information is helpful to both topology-based [2,7-9] and location-based routing [4]. From our viewpoint, we may use boundary information to build a routing protocol that can avoid holes and produce optimal paths. This will be a part of our future research.
To date, only a few topological boundary detection algorithms have been proposed [5-7]. With the exception of the one introduced in [7], these algorithms are not competitors to our approach, since they seem feasible only for uniform and very-high-density node distributions. The algorithm in [7] uses beacon and iso-level concepts to identify nodes near boundaries. The issues posed in [7] concern beacon selection. As far as we know, beacon selection is as complex as leader election [13]. With four global beacons and many local beacons, the time required to select beacons incurs a considerable cost. In addition, it floods the network several times. This adds considerably to the convergence time.
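The 2-neighbor graph construction itself is not spelled out in this abstract, so the sketch below is not the paper's algorithm. It is a naive connectivity-only baseline that illustrates the setting: under roughly uniform deployment, nodes near the field boundary or near a hole tend to have fewer neighbors than interior nodes, so a simple degree threshold flags boundary candidates.

```python
# NOT the paper's 2-neighbor graph method: a naive degree-threshold
# heuristic that works only from connectivity, flagging nodes whose
# neighbor count falls well below the network average.
def boundary_candidates(adjacency, threshold_ratio=0.7):
    """adjacency: dict mapping node id -> set of neighbor ids."""
    degrees = {v: len(nbrs) for v, nbrs in adjacency.items()}
    avg_degree = sum(degrees.values()) / len(degrees)
    # Nodes with markedly fewer neighbors than average are candidates
    # for lying near the field boundary or a hole.
    return {v for v, d in degrees.items() if d < threshold_ratio * avg_degree}
```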

13 citations


Journal ArticleDOI
TL;DR: A fault-tolerant wireless AMR network (FWAMR) is proposed, which is designed to improve the robustness of the conventional ZigBee-based AMR systems by coping well with dynamic error environments.
Abstract: Due to low cost, low power, and scalability, ZigBee is considered an efficient wireless AMR infrastructure. However, these characteristics of ZigBee can make the devices more vulnerable to unexpected error environments. In this paper, a fault-tolerant wireless AMR network (FWAMR) is proposed, which is designed to improve the robustness of conventional ZigBee-based AMR systems by coping well with dynamic error environments. The experimental results demonstrate that the FWAMR is considerably fault-tolerant compared with the conventional ZigBee-based AMR network. Keywords: AMR, AMI, Fault Tolerance, ZigBee 1. Introduction Automatic meter reading (AMR) enables utility companies to communicate remotely with residential utility meters. Traditionally, field technicians accessed utility meters on the customer premises to record usage information manually. With today’s smart meters, utility companies (electricity, gas, water, etc.) can now avoid this costly manual work, and set up two-way data communications between the utility’s data center and the meters. More detailed customer information can serve to offer enhanced services such as time-of-use pricing, demand management, and load profiles. Remote meter reading systems have been developed in parallel with various network technologies for many years. The communication technology used for AMR systems can be largely categorized into wired and wireless. For a wired AMR network, a telephone network [3, 11] or PLC [6-7, 9] has been used. In particular, PLC is an efficient way of power metering since an inherent means of communication already exists within the infrastructure, so metering data can be transmitted over the power line itself via power line communications. However, because of installation cost and safety, gas meters or water meters cannot be electrically connected together by a power line. So, recently, the use of wireless technology has become more common. Wireless AMR networks include cellular networks [4, 10], WLAN [4, 8], ZigBee (or IEEE 802.15.4) [1-2], and other short-range wireless systems [12-13]. Oska et al. [9] propose a hybrid system: WLAN communication consisting of PLC-Ethernet bridges. Recently, AMR systems associated with wireless sensor networks have been introduced in [5, 14]. In particular, owing to several advantages such as easy and inexpensive installation, low development cost, flexibility, and scalability, the popularity of ZigBee-based AMR systems is growing explosively. However, ZigBee devices are extremely limited in resources, including processing, memory, and power. In addition, ZigBee is an autonomous network; it does not always operate under user intervention, but regulates itself. Sometimes these characteristics of ZigBee can make the devices more vulnerable to unexpected error environments. In this paper, a fault-tolerant wireless AMR network (FWAMR) is proposed, which is designed to improve the robustness of conventional ZigBee-based AMR systems by coping well with dynamic error environments. The remainder of this paper is organized as follows: Section 2 examines some weaknesses of ZigBee-based AMR networks. The proposed FWAMR scheme is introduced in Section 3. Performance is evaluated in Section 4 through experiments based on a real system implementation. Finally, this paper concludes with Section 5.

11 citations


Journal ArticleDOI
TL;DR: The strategy of simplified population-based incremental learning (PBIL) is adopted to reduce the problems with memory consumption and search inefficiency, and a scheme for controlling the distance of neighbors for disparity smoothness is inserted to obtain a wide-area consistency of disparities.
Abstract: To solve the general problems surrounding the application of genetic algorithms in stereo matching, two measures are proposed. Firstly, the strategy of simplified population-based incremental learning (PBIL) is adopted to reduce the problems with memory consumption and search inefficiency, and a scheme for controlling the distance of neighbors for disparity smoothness is inserted to obtain a wide-area consistency of disparities. In addition, an alternative version of the proposed algorithm, without the use of a probability vector, is also presented for simpler set-ups. Secondly, programmable graphics hardware (GPU) consists of multiple multi-processors and has powerful parallelism which can perform operations in parallel at low cost. Therefore, in order to decrease the running time further, a model of the proposed algorithm, which can be run on programmable graphics hardware (GPU), is presented for the first time. The algorithms are implemented on the CPU as well as on the GPU and are evaluated by experiments. The experimental results show that the proposed algorithm offers better performance than traditional BMA methods with a deliberate relaxation and its modified version in terms of both running speed and stability. The comparison of computation times for the algorithm on the GPU and on the CPU shows that the speed-up of the former over the latter grows as the image size increases.
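For readers unfamiliar with PBIL, the following is a minimal sketch of the standard algorithm that the paper simplifies: a probability vector is sampled to produce a population, then nudged toward the best sample. How a bit string encodes disparities, the simplifications, and the neighbor-distance control scheme are the paper's and are not reproduced here.

```python
import random

# Minimal sketch of standard PBIL (not the paper's simplified variant).
def pbil(fitness, n_bits, pop_size=50, lr=0.1, iters=100):
    prob = [0.5] * n_bits  # probability vector over bit positions
    for _ in range(iters):
        # Sample a population of bit strings from the probability vector.
        pop = [[1 if random.random() < p else 0 for p in prob]
               for _ in range(pop_size)]
        best = max(pop, key=fitness)
        # Shift the probability vector toward the best sample.
        prob = [(1 - lr) * p + lr * b for p, b in zip(prob, best)]
    return prob

# Toy usage: evolve bit strings whose bit-sum approaches 8.
final_prob = pbil(lambda bits: -abs(sum(bits) - 8), n_bits=16)
```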

10 citations


Journal ArticleDOI
TL;DR: The experimental results indicate that the proposed real-time scheduling strategy (RTS) performs well in terms of both communication throughput and overload handling.
Abstract: Most of the tasks in wireless sensor networks (WSN) are requested to run in a real-time way. Neither EDF nor FIFO can ensure real-time scheduling in WSN. A real-time scheduling strategy (RTS) is proposed in this paper. All tasks are divided into two layers and assigned different priorities. RTS utilizes preemption to ensure hard real-time scheduling. The experimental results indicate that RTS performs well in terms of both communication throughput and overload handling.
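A minimal sketch of the two-layer idea follows, assuming layer 0 holds hard real-time tasks that preempt layer 1 best-effort tasks. The class name, the layer-assignment policy, and the FIFO tie-break within a layer are assumptions, since the abstract does not give these details.

```python
import heapq

# Hedged sketch of a two-layer priority queue: a task in the hard
# real-time layer (0) preempts any running best-effort task (layer 1).
class RTSQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # FIFO tie-break within the same layer

    def submit(self, task, layer):
        # layer 0 = hard real-time, layer 1 = best effort (assumed split)
        heapq.heappush(self._heap, (layer, self._seq, task))
        self._seq += 1

    def should_preempt(self, running_layer):
        # Preempt when a strictly higher-priority (lower-layer) task waits.
        return bool(self._heap) and self._heap[0][0] < running_layer

    def next_task(self):
        layer, _, task = heapq.heappop(self._heap)
        return task, layer
```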

10 citations


Journal ArticleDOI
TL;DR: The proposed storage model semantically classifies OWL elements, stores an ontology in separately classified tables according to the classification, and enhances the query processing performance by using hierarchical knowledge.
Abstract: As well as providing various APIs for the development of inference engines and storage models, Jena is widely used in the development of systems and tools related to Web ontology management. However, Jena still has several problems with regard to the development of real applications, one of the most important being that its query processing performance is unacceptable. This paper proposes a storage model to improve the query processing performance of the original Jena storage. The proposed storage model semantically classifies OWL elements and stores an ontology in separately classified tables according to the classification. In particular, hierarchical knowledge is managed separately, which enhances the processing performance of queries that involve inference. An experimental evaluation was conducted, the results of which showed that the proposed storage model provides improved performance compared with Jena.

9 citations


Journal ArticleDOI
TL;DR: This paper provides an efficient method of automated in-text keyword tagging based on a large-scale controlled term collection or keyword dictionary, where the computational complexity of O(mN) (if a pattern matching algorithm is used) can be reduced to O(m log N) (if an Information Retrieval technique is adopted).
Abstract: As shown in Wikipedia, tagging or cross-linking through major keywords in a document collection improves not only the readability of documents but also responsive and adaptive navigation among related documents. In recent years, the Semantic Web has increased the importance of social tagging as a key feature of Web 2.0 and, as its crucial phenotype, the Tag Cloud has emerged to the public. In this paper we provide an efficient method of automated in-text keyword tagging based on a large-scale controlled term collection or keyword dictionary, where the computational complexity of O(mN) (if a pattern matching algorithm is used) can be reduced to O(m log N) (if an Information Retrieval technique is adopted), where m is the length of the target document and N is the total number of candidate terms to be tagged. The results show that automatic in-text tagging with keywords filtered by Information Retrieval is about 6 to 40 times faster than the fastest pattern matching algorithm.
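A minimal sketch of the complexity argument, assuming single-token terms: if the N candidate terms are kept sorted, each of the m document tokens can be checked by binary search in O(log N), giving O(m log N) overall instead of O(mN) for naive matching. The paper's IR-based candidate filtering is richer than this single-token lookup; multi-word terms would need extra handling.

```python
import bisect

# Binary-search lookup of each document token against a sorted term list.
def tag_keywords(tokens, sorted_terms):
    tagged = []
    for tok in tokens:
        i = bisect.bisect_left(sorted_terms, tok)  # O(log N) per token
        if i < len(sorted_terms) and sorted_terms[i] == tok:
            tagged.append('<tag>%s</tag>' % tok)
        else:
            tagged.append(tok)
    return ' '.join(tagged)

terms = sorted(['ontology', 'semantic', 'tagging'])
print(tag_keywords('the semantic web relies on ontology design'.split(), terms))
```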

Journal ArticleDOI
TL;DR: This paper suggests a way to map the input space to a reduced space, which may avoid the unreliability, ambiguity and redundancy of individual terms as descriptors in LDA.
Abstract: Text data has always accounted for a major portion of the world’s information. As the volume of information increases exponentially, the portion of text data also increases significantly. Text classification is therefore still an important area of research. LDA is a probabilistic model which has been used in many applications in many other fields. As regards text data, LDA also has many applications, and various enhancements to it have been proposed. However, it seems that no application takes care of the input for LDA. In this paper, we suggest a way to map the input space to a reduced space, which may avoid the unreliability, ambiguity and redundancy of individual terms as descriptors. The purpose of this paper is to show that LDA can be performed properly in a “clean and clear” space. Experiments are conducted on the 20 Newsgroups data set. The results show that the proposed method can boost the classification results when an appropriate rank of the reduced space is chosen.
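The abstract does not name the mapping, so the following is only one plausible reading: a rank-k truncated SVD (as in latent semantic analysis) is a standard way to map a noisy term space to a reduced space, with k playing the role of the "rank of the reduced space" mentioned above.

```python
import numpy as np

# Hedged sketch: reduce a documents-by-terms matrix to rank k via truncated
# SVD before classification. Whether this matches the paper's exact mapping
# is an assumption.
def reduce_rank(X, k):
    """X: documents x terms matrix; returns documents x k representation."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k]  # project documents onto the top-k factors

X = np.random.rand(20, 100)      # toy corpus: 20 documents, 100 terms
X_reduced = reduce_rank(X, k=5)  # the choice of rank k drives the results
```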

Journal ArticleDOI
TL;DR: A method of man-in-the-middle attack based on ARP spoofing is described, and a method of preventing such attacks is proposed.
Abstract: Man-in-the-middle attack is widely used as a method of attacking the network. To discover how this type of attack works, this paper describes a method of man-in-the-middle attack based on ARP spoofing, and proposes a method of preventing such attacks.
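The paper's actual prevention method is not detailed in this abstract; the sketch below shows one common countermeasure, assuming we can observe ARP replies: flag any IP address whose advertised MAC address changes, the typical signature of ARP-cache poisoning.

```python
# One common ARP-spoofing countermeasure (not necessarily the paper's):
# learn the first IP-to-MAC binding seen, then alert whenever a later ARP
# reply advertises a different MAC for the same IP.
def check_arp_reply(ip, mac, bindings, alerts):
    """bindings: dict ip -> first-seen MAC; alerts: list of warnings."""
    known = bindings.get(ip)
    if known is None:
        bindings[ip] = mac          # learn the first binding
    elif known != mac:
        alerts.append('possible ARP spoofing: %s moved %s -> %s'
                      % (ip, known, mac))

bindings, alerts = {}, []
check_arp_reply('192.168.0.1', 'aa:bb:cc:dd:ee:01', bindings, alerts)
check_arp_reply('192.168.0.1', 'aa:bb:cc:dd:ee:99', bindings, alerts)  # spoofed
print(alerts)
```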

Journal ArticleDOI
TL;DR: This paper presents a compact cryptographic hardware architecture suitable for the Mobile Trusted Module (MTM), which requires low-area and low-power characteristics.
Abstract: This paper presents a compact cryptographic hardware architecture suitable for the Mobile Trusted Module (MTM), which requires low-area and low-power characteristics. The built-in cryptographic engine in the MTM is one of the most important circuit blocks and contributes to the performance of the whole platform because it is used as the key primitive supporting digital signature, platform integrity and command authentication. Unlike personal computers, mobile platforms have very stringent limitations with respect to available power, physical circuit area, and cost. Therefore, special architecture and design methods for a compact cryptographic hardware module are required. The proposed cryptographic hardware has a chip area of 38K gates for RSA and 12.4K gates for unified SHA-1 and SHA-256, respectively, on a 0.25um CMOS process. It consumes at most 3.96mA for RSA and 2.16mA for SHA computations at 25MHz.

Journal ArticleDOI
TL;DR: A utility-based data rate allocation algorithm to provide high-quality mobile video streaming over femtocell networks that is capable of minimizing the transmission power of backhaul connections while guaranteeing a high overall quality of service for all users of the same binder is proposed.
Abstract: This paper proposes a utility-based data rate allocation algorithm to provide high-quality mobile video streaming over femtocell networks. We first derive a utility function to calculate the optimal data rates for maximizing the aggregate utilities of all mobile users in the femtocell. The total sum of optimal data rates is limited by the link capacity of the backhaul connections. Furthermore, electromagnetic cross-talk poses a serious problem for the backhaul connections, and its influence passes on to mobile users, causing data rate degradation in the femtocell networks. We have also studied a fixed-margin iterative water-filling algorithm to achieve the target data rate of each backhaul connection as a counter-measure to the cross-talk problem. The results of our simulation show that the algorithm is capable of minimizing the transmission power of backhaul connections while guaranteeing a high overall quality of service for all users of the same binder. In particular, it can provide the target data rate required to maximize user satisfaction with the mobile video streaming service over the femtocell networks.
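As background, the sketch below implements classic water-filling power allocation, the building block behind the fixed-margin iterative variant the abstract mentions. The fixed-margin variant additionally iterates per line toward a target rate, which is not reproduced; the function name and data layout are illustrative.

```python
# Classic water-filling: pour a fixed power budget over subchannels so that
# power + noise-to-gain ratio equals a common "water level" on the channels
# that receive any power at all.
def water_filling(inv_gains, total_power):
    """inv_gains: noise-to-gain ratios per subchannel; returns power per channel."""
    channels = sorted(range(len(inv_gains)), key=lambda i: inv_gains[i])
    active = list(channels)
    while active:
        # Water level that spends the whole budget over the active channels.
        level = (total_power + sum(inv_gains[i] for i in active)) / len(active)
        if level >= inv_gains[active[-1]]:  # worst active channel stays above water
            return {i: (level - inv_gains[i] if i in active else 0.0)
                    for i in channels}
        active.pop()  # drop the worst channel and recompute
    return {i: 0.0 for i in channels}

print(water_filling([0.1, 0.5, 2.0], total_power=1.0))  # channel 2 gets nothing
```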

Journal ArticleDOI
TL;DR: A spatial query processing scheme, the Minimum Bounding Area Based Scheme, is proposed to decrease the number of outgoing messages during query processing by reducing unnecessary message propagation.
Abstract: Sensors are deployed to gather physical, environmental data in sensor networks. Depending on the scenario, it is often assumed that batteries in sensors are difficult to recharge or exchange. Thus, sensors should be able to process users' queries in an energy-efficient manner. This paper proposes a spatial query processing scheme, the Minimum Bounding Area Based Scheme. The purpose of this scheme is to decrease the number of outgoing messages during query processing. To do that, each sensor has to maintain some partial information locally about the locations of descendent nodes. In the initial setup phase, the routing path is established. Each child node delivers to its parent node the location information covering itself and all of its descendent nodes. A parent node has to maintain several minimum bounding boxes per child node. This scheme can reduce unnecessary message propagation during query processing. Finally, the experimental results show the effectiveness of the proposed scheme.
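A minimal sketch of the pruning step, assuming boxes are (min_x, min_y, max_x, max_y) tuples: a parent forwards a query only to those children whose stored minimum bounding boxes intersect the query region, suppressing messages to irrelevant subtrees.

```python
# Axis-aligned box intersection test; boxes are (min_x, min_y, max_x, max_y).
def intersects(a, b):
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def children_to_forward(child_boxes, query_region):
    """child_boxes: dict child id -> list of bounding boxes for its subtree."""
    return [child for child, boxes in child_boxes.items()
            if any(intersects(box, query_region) for box in boxes)]

boxes = {'c1': [(0, 0, 5, 5)], 'c2': [(10, 10, 20, 20)]}
print(children_to_forward(boxes, (3, 3, 8, 8)))  # -> ['c1']; c2 is pruned
```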

Journal ArticleDOI
TL;DR: This paper analyses the context framework, an example of existing frameworks, using a Petri net, examining its advantages and disadvantages, and presents a new framework, PAWS, which can solve the overhead problem of context in SOAP messages.
Abstract: Many researchers have developed frameworks capable of handling context information and able to be adapted and used by any Web service. However, no research has been conducted on the systematic analysis of existing frameworks. This paper analyses the context framework, an example of existing frameworks, using a Petri net, and examines its advantages and disadvantages. Then, a Petri net model with the disadvantages removed is introduced. Based on the model, a new framework is presented. The proposed PAWS (privacy aware Web services) framework provides extended context management and communicates flexible context information for every session. The proposed framework can solve the overhead problem of context in SOAP messages. It also protects user privacy according to user preferences.

Journal ArticleDOI
TL;DR: A new variational level set evolving algorithm without re-initialization is presented, which consists of an internal energy term that penalizes deviations of the level set function from a signed distance function, and an external energy term that drives the motion of the zero level set toward the desired image feature.
Abstract: Level set methods are numerical techniques for tracking interfaces and shapes. They have been successfully used in image segmentation. A new variational level set evolving algorithm without re-initialization is presented in this paper. It consists of an internal energy term that penalizes deviations of the level set function from a signed distance function, and an external energy term that drives the motion of the zero level set toward the desired image feature. This algorithm can be easily implemented using a simple finite difference scheme. Meanwhile, not only can the initial contour be placed anywhere in the image, but the interior contours can also be automatically detected. Keywords: Level Set Methods, Evolving Algorithm, Without Re-initialization, Image Segmentation 1. Introduction Level set methods were first introduced by Osher and Sethian [1] to capture moving fronts. Active contours were introduced in order to segment objects in images using dynamic curves. Level set methods provide mathematical and computational tools for the tracking of evolving interfaces with sharp corners and cusps, and topological changes. They efficiently compute optimal robot paths around obstacles, and extract clinically useful features from noisy images. In traditional level set methods, re-initialization, a technique for periodically re-initializing the level set function to a signed distance function, has been used as a numerical remedy for maintaining stable curve evolution. However, many proposed re-initialization schemes have the undesirable side effect of moving the zero level set away from its original location. As such, there are certain drawbacks associated with re-initialization [2]. In this paper, we present a new variational level set evolving algorithm without re-initialization [3]. It consists of an internal energy term that penalizes deviations of the level set function from a signed distance function, and an external energy term that drives the motion of the zero level set toward the desired image feature [4]. This algorithm can be computed more efficiently and implemented using only a very simple finite difference scheme. Meanwhile, the initial contour can be placed anywhere in the image, and a larger time step can be used to speed up the evolution.
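The internal/external decomposition described above matches the well-known variational formulation without re-initialization; a representative form of the energy (which reference [3] appears to denote) is shown below, where P penalizes deviation of the level set function φ from a signed distance function, g is an edge indicator, H is the Heaviside function, δ is its derivative, and μ, λ, ν are weights.

```latex
% Representative energy functional for level set evolution without
% re-initialization (internal term P plus external term):
E(\phi) = \mu\, P(\phi) + E_{\mathrm{ext}}(\phi), \qquad
P(\phi) = \int_{\Omega} \tfrac{1}{2}\bigl(|\nabla\phi| - 1\bigr)^{2}\, dx\, dy,
\qquad
E_{\mathrm{ext}}(\phi) = \lambda \int_{\Omega} g\,\delta(\phi)\,|\nabla\phi|\, dx\, dy
                       + \nu \int_{\Omega} g\, H(-\phi)\, dx\, dy .
```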

Journal ArticleDOI
TL;DR: Early Binding Updates (EBU) and a Security Access Gateway (SAG) are proposed to solve the problem, offering a complete mechanism with low latency, a low handoff calculation load, and high security.
Abstract: By providing ubiquitous Internet connectivity, wireless networks offer more convenient ways for users to surf the Internet. However, wireless networks encounter more technological challenges than wired networks, such as bandwidth, security problems, and handoff latency. Thus, this paper proposes new technologies to solve these problems. First, a Security Access Gateway (SAG) is proposed to solve the security issue. Originally, mobile terminals were unable to process high-security calculations because of their low calculating power. SAG not only offers high calculating power to serve the encryption demand of SAG’s domain, but also helps mobile terminals to establish a multiple safety tunnel to maintain a secure domain. Second, Robust Header Compression (RoHC) technology is adopted to increase the utilization of bandwidth. Instead of the Access Point (AP), an Access Gateway (AG) is used to deal with packet header compression and de-compression from the wireless end. AG’s high calculating power is able to reduce the load on the AP. In the original architecture, the AP has to deal with a large number of header compression/de-compression demands from mobile terminals. Eventually, wireless networks must offer users “Mobility” and “Roaming”. For wireless networks to achieve “Mobility” and “Roaming,” we can use Mobile IPv6 (MIPv6) technology. Nevertheless, such technology might cause latency. Furthermore, how mobile terminals can keep using, across a handoff, the security tunnel and header compression established before the handoff is another great challenge. Thus, this paper proposes to solve these problems by using Early Binding Updates (EBU) and the Security Access Gateway (SAG) to offer a complete mechanism with low latency, a low handoff calculation load, and high security.

Journal ArticleDOI
TL;DR: A novel way to solve the vehicle routing problem with pickups and deliveries (VRPPD) is proposed by introducing a two-phase heuristic routing algorithm consisting of a clustering phase, which uses the geometrical center of a cluster, and a route establishment phase, which applies a two-way search to each route after applying the TSP algorithm to it.
Abstract: The classical vehicle routing problem (VRP) can be extended by including customers who want to send goods to the depot. This type of VRP is called the vehicle routing problem with pickups and deliveries (VRPPD). This study proposes a novel way to solve VRPPD by introducing a two-phase heuristic routing algorithm which consists of a clustering phase that uses the geometrical center of a cluster, and a route establishment phase that applies a two-way search to each route after applying the TSP algorithm to it. Experimental results show that the suggested algorithm can generate better initial solutions for more computer-intensive meta-heuristics than other existing methods such as the giant-tour-based partitioning method or the insertion-based method. Keywords: Vehicle Routing Problem, Heuristic Algorithm, Initial Solution 1. Introduction The vehicle routing problem (VRP) is a combinatorial optimization and nonlinear programming problem seeking to service a number of customers with a fleet of vehicles. VRP has been an important problem in the fields of transportation, distribution and logistics since Dantzig and Ramser [1] first proposed the problem. The vehicle routing problem with pickups and deliveries (VRPPD) is an extension of the classical VRP. Recently, research on VRPPD has surged, essentially because pickup demands for packaging and used-product returns from customer locations have increased substantially due to environmental and government regulations, and because integrating pickups with deliveries maximally utilizes vehicle capacity and saves money. So, VRPPD must take into account the goods that customers return to the delivery vehicle. This restriction makes the planning problem more difficult and increases travel distances or the number of vehicles. Restricted situations, in which there are no interchanges of goods between customers and all delivery demands start from the depot and all pickup demands are brought back to the depot, are usually considered in VRPPD. The VRPPD model can be classified into three types: delivery-first and pickup-second; mixed pickups and deliveries; and simultaneous pickups and deliveries. The assumption of the delivery-first and pickup-second model is that all deliveries must be made before any pickups. This assumption was due to the fact that vehicles were rear-loaded and because rearranging delivery loads onboard to accommodate new pickup loads was difficult. However, these days most vehicles have side-loading as well as rear-loading functions. To accommodate new pickup loads, rearranging delivery loads onboard is no longer a requirement. Hence, the assumption that all deliveries must be made before any pickup can occur can be relaxed, allowing for the mixed pickups and deliveries model in which deliveries and pickups may occur in any sequence on a vehicle route. When customers can simultaneously receive and send goods, it is referred to as simultaneous pickup and delivery. A VRPPD solution is feasible only if the following three conditions are satisfied: delivery-feasibility, pickup-feasibility and load-feasibility. Delivery-feasibility and pickup-feasibility mean that both the total delivery and the total pickup demands on any vehicle route do not exceed the maximum capacity of the vehicle; load-feasibility means that the maximum capacity of the vehicle is not exceeded at any point on the route. This study focuses on solving mixed pickups and deliveries by applying the two-phase heuristic algorithm.
The first phase selects the farthest node (customer) among unclustered nodes as the cluster seed and clusters nodes by using the notion of the geometrical center of a cluster. The result of the first phase satisfies delivery-feasibility and pickup-feasibility since the total deliveries and pickups on any route are less than or equal to the maximum capacity of the vehicle. The second phase of the algorithm applies the TSP algorithm to each cluster to find the shortest route regardless of load-feasibility, and then finds routes that satisfy load-feasibility.
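A minimal sketch of the clustering phase as described above, with an invented data layout (customer id mapped to (x, y, delivery, pickup)) and a greedy assignment order; tie-breaking details and the second (TSP plus two-way search) phase are omitted.

```python
import math

# Sketch of the clustering phase: seed each cluster with the farthest
# unclustered customer, then grow it toward its geometrical center while
# both delivery and pickup totals respect the vehicle capacity.
def cluster(customers, depot, capacity):
    """customers: dict id -> (x, y, delivery, pickup); depot: (x, y)."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    unclustered, clusters = set(customers), []
    while unclustered:
        seed = max(unclustered, key=lambda c: dist(customers[c][:2], depot))
        unclustered.remove(seed)
        group, center = [seed], customers[seed][:2]
        dsum, psum = customers[seed][2], customers[seed][3]
        # Greedily add the customers nearest the current center while feasible.
        for c in sorted(unclustered, key=lambda c: dist(customers[c][:2], center)):
            if dsum + customers[c][2] <= capacity and psum + customers[c][3] <= capacity:
                group.append(c)
                unclustered.remove(c)
                dsum += customers[c][2]
                psum += customers[c][3]
                # Recompute the geometrical center over the cluster members.
                xs = [customers[g][0] for g in group]
                ys = [customers[g][1] for g in group]
                center = (sum(xs) / len(xs), sum(ys) / len(ys))
        clusters.append(group)
    return clusters
```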

Journal ArticleDOI
TL;DR: The bidding problem for resources in the reverse auction resource allocation model is investigated, and new bidding strategies are proposed.
Abstract: Grid computing is a new technology which involves efforts to create a huge source of processing power by connecting computational resources throughout the world. The key issue in such environments is resource allocation and the appropriate job scheduling strategy. Several approaches to scheduling in these environments have been proposed to date. Market-driven scheduling, as a decentralized solution for such complicated environments, has introduced new challenges. In this paper, the bidding problem for resources in the reverse auction resource allocation model is investigated, and new bidding strategies are proposed and evaluated.

Journal ArticleDOI
TL;DR: The core network technologies that will enable the deployment of a high-quality IPTV service are introduced, and a suitable methodology for application and deployment policies on each technology is proposed to lead the establishment and globalization of the IPTV service.
Abstract: It is absolutely essential to implement advanced IP network technologies such as QoS, multicast, high availability, and security in order to provide real-time services like IPTV via an IP backbone network. In reality, the existing commercial networks of Internet service providers are subject to certain technical difficulties and limitations in embodying those technologies. Ongoing research efforts involve the experimental engineering work and implementation experience required to launch an IPTV service on the recently developed premium-level IP backbone. This paper introduces the core network technologies that will enable the deployment of a high-quality IPTV service, and then proposes a suitable methodology for application and deployment policies on each technology to lead the establishment and globalization of the IPTV service.

Journal ArticleDOI
TL;DR: The application of AHSEN (Autonomic Healing-based Self management Engine) to an OKKAM Project infrastructure backbone cluster that mimics the web-service-based architecture of the u-Zone gateway infrastructure shows that self healing significantly improves performance and clearly demarcates the logical ambiguities in contemporary designs of self healing infrastructures proposed for large-scale computing infrastructures.
Abstract: Self healing systems are considered a cognition-enabled subform of fault tolerance systems. But the experiments that we report in this paper show that self healing systems can also be used for performance optimization, configuration management, access control management and a number of other functions. The exponential complexity that results from interaction between autonomic systems and users (software and human users) has hindered the deployment and use of intelligent systems for a while now. We show that converting that exceptional complexity into self-growing knowledge (policies, in our case) can make up for the initial development cost of building an intelligent system. In this paper, we report the application of AHSEN (Autonomic Healing-based Self management Engine) to an OKKAM Project infrastructure backbone cluster that mimics the web-service-based architecture of the u-Zone gateway infrastructure. ‘Blind’ load division on a per-request basis is not optimal for a distributed and performance-hungry infrastructure such as OKKAM. The approach adopted assesses the active threads on the virtual machine and makes resource estimates for active processes. The availability of a certain server is represented through worker modules at the load server. Our simulation results on the OKKAM infrastructure show that self healing significantly improves performance and clearly demarcates the logical ambiguities in contemporary designs of self healing infrastructures proposed for large-scale computing infrastructures.

Journal ArticleDOI
TL;DR: Simulation results show that the Differentiated Services Based Admission Control and Routing Algorithm for IPv6 provides an excellent packet delivery ratio, reduces the control packet overhead, and makes use of the resources present on multiple paths to the destination network, while almost every admitted flow shows compliance with its Service Level Agreement.
Abstract: In this paper we propose a Differentiated Services Based Admission Control and Routing Algorithm for IPv6 (ACMRA). The basic DiffServ architecture lacks an admission control mechanism, so the injection of more QoS-sensitive traffic into the network can cause congestion at the core of the network. Our ACMRA combines the admission control phase with the route finding phase, and our routing protocol has been designed to work alongside DiffServ-based networks. The ACMRA constructs label switched paths in order to provide rigorous QoS provisioning. We have conducted extensive simulations to validate the effectiveness and efficiency of our proposed admission control and routing algorithm. Simulation results show that the ACMRA provides an excellent packet delivery ratio, reduces the control packet overhead, and makes use of the resources present on multiple paths to the destination network, while almost every admitted flow shows compliance with its Service Level Agreement.

Journal ArticleDOI
TL;DR: This paper uses a two-phase Scatternet Formation Algorithm and treats devices differently, not only on the basis of hardware characteristics but also in consideration of other conditions such as different classes and different groups.
Abstract: Nowadays, it has become common to equip a device with Bluetooth. As such devices become pervasive in the world, much work has been done on forming them into a network; however, almost all Bluetooth Scatternet Formation Algorithms assume devices are homogeneous. Even the exceptional algorithms barely mention the different characteristics of devices, such as computational ability and the traffic load on special nodes like bridge nodes or super nodes, which are usually the bottleneck in the scatternet. In this paper, we treat devices differently, not only on the basis of hardware characteristics but also in consideration of other conditions such as different classes, different groups and so on. We use a two-phase Scatternet Formation Algorithm: in the first phase, we construct scatternets for a specified kind of device; in the second phase, we connect these scatternets using as few devices of other kinds as possible as bridge nodes. Finally, we give some applications to show the benefit of classification.

Journal ArticleDOI
TL;DR: A model is suggested that dynamically controls the RCS worm using the Power-Law characteristics and depth distribution of delivery nodes commonly seen in preferential growth networks.
Abstract: Ever since the network-based malicious code commonly known as a 'worm' surfaced in the early part of the 1980s, its prevalence has grown more and more. The RCS (Random Constant Spreading) worm has become a dominant malicious virus in recent computer networking circles. The worm retards the availability of an overall network by exhausting resources such as CPU capacity, network peripherals and transfer bandwidth, causing damage to uninfected systems as well as infected ones. The generation and spreading cycle of these worms progresses rapidly. Existing studies on countering malicious code have examined microscopic models for detecting worm generation based on some specific pattern or sign of attack, thus preventing its spread by countering the worm directly on detection. However, due to zero-day threat actualization, rapid spreading of the RCS worm and reduction of survival time, securing a security model to ensure the survivability of the network became an urgent problem that the existing solution-oriented security measures did not address. This paper analyzes the efficient dynamic networks studied recently. Essentially, this paper suggests a model that dynamically controls the RCS worm using the Power-Law characteristics and depth distribution of delivery nodes, which are commonly seen in preferential growth networks. Moreover, we suggest a model that dynamically controls the spread of the worm using information about the depth distribution of delivery. We also verified via simulation that the load for each node was minimized at an optimal depth to effectively restrain the spread of the worm.