
Showing papers in "Journal of Networks in 2010"


Journal ArticleDOI
TL;DR: A survey of the coverage problem for wireless sensor networks (WSNs); besides some basic design considerations in WSN coverage, two challenges are described, namely maximizing network lifetime and network connectivity.
Abstract: Wireless sensor networks constitute the platform of a broad range of applications related to national security, surveillance, military operations, health care, and environmental monitoring. The coverage of a WSN answers questions about the quality of service (surveillance) that the network can provide. Therefore, maximizing coverage using resource-constrained nodes is a non-trivial problem. The coverage problem for wireless sensor networks (WSNs) has been studied extensively in recent years, especially when combined with connectivity and energy efficiency. In this paper we present a survey of the coverage problem. Besides some basic design considerations in WSN coverage, we describe two challenges, namely maximizing network lifetime and network connectivity. We also provide a brief summary and comparison of existing coverage schemes.

163 citations


Journal ArticleDOI
TL;DR: The QR-code technique can be introduced into a one-time password authentication protocol; the resulting scheme not only eliminates the password verification table, but is also a cost-effective solution, since most Internet users already have mobile phones.
Abstract: User authentication is one of the fundamental procedures for ensuring secure communication and sharing system resources over an insecure public network channel. Thus, a simple and efficient authentication mechanism is required for securing the network system in a real environment. In general, the password-based authentication mechanism provides the basic capability to prevent unauthorized access. In particular, the purpose of a one-time password is to make it more difficult to gain unauthorized access to restricted resources. Instead of using a password file as in conventional authentication systems, many researchers have worked to implement various one-time password schemes using smart cards, time-synchronized tokens or short message service in order to reduce the risk of tampering and the maintenance cost. However, these schemes are impractical because of the far-from-ubiquitous hardware devices or the infrastructure requirements. To remedy these weaknesses, the QR-code technique is introduced into our one-time password authentication protocol. Unlike previous approaches, the proposed QR-code-based scheme not only eliminates the password verification table, but is also a cost-effective solution, since most Internet users already have mobile phones. For this reason, instead of carrying around a separate hardware token for each security domain, the handiness of the mobile phone makes our approach more practical and convenient.
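For orientation only (the paper's QR-code protocol is not reproduced here), the Python sketch below shows a classic hash-chain one-time password scheme: the verifier stores a single chain value per user rather than a table of reusable passwords, and each login consumes one preimage. All names and parameters are illustrative.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """Build a hash chain seed -> h(seed) -> ... -> h^n(seed)."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

class Server:
    """Stores only the latest verified chain value, not the user's passwords."""
    def __init__(self, last_value: bytes):
        self.last = last_value            # h^n(seed)

    def verify(self, otp: bytes) -> bool:
        if h(otp) == self.last:           # otp must hash to the stored value
            self.last = otp               # move the verifier down the chain
            return True
        return False

# Example use (illustrative): chain = make_chain(b"user-secret", 1000);
# server = Server(chain[-1]); successive logins present chain[-2], chain[-3], ...
# In the paper's setting, the one-time value would be delivered via a QR code
# scanned by the user's mobile phone.
```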

108 citations


Journal ArticleDOI
TL;DR: The results show that the LDB scheme can achieve a high localization accuracy, even in sparse networks, and is evaluated by simulations.
Abstract: In this paper, we propose LDB, a novel distributed 3D localization scheme with directional beacons for Underwater Acoustic Sensor Networks (UWA-SNs). LDB localizes sensor nodes using an Autonomous Underwater Vehicle (AUV) as a mobile beacon sender. Mounted with a directional transceiver that creates a conical-shaped directional acoustic beam, the AUV patrols over the 3D deployment volume along a predefined trajectory, sending beacons at a constant interval towards the sensor nodes. By listening to two or more beacons sent from the AUV, the nodes can localize themselves silently. Through theoretical analysis, we provide the upper bound of the estimation error of the scheme. We also evaluate the scheme by simulations, and the results show that our scheme can achieve high localization accuracy, even in sparse networks.

108 citations


Journal ArticleDOI
TL;DR: The lifetime of a wireless sensor network is classified into different types and the corresponding CH selection method is given to achieve the life-time extension objective.
Abstract: The past few years have witnessed an increase in the potential uses of wireless sensor networks (WSNs), such as disaster management, combat field reconnaissance, border protection and security surveillance. Sensors in these applications are expected to be remotely deployed in large numbers and to operate autonomously in unattended environments. Since a WSN is composed of nodes with non-replenishable energy resources, prolonging the network lifetime is the main concern. To support scalability, nodes are often grouped into disjoint clusters. Each cluster has a leader, often referred to as the cluster head (CH). A CH is responsible not only for general requests but also for assisting the general nodes in routing the sensed data to the target nodes. The power consumption of a CH is higher than that of a general (non-CH) node. Therefore, CH selection affects the lifetime of a WSN. However, the application scenario contexts of WSNs determine the definitions of lifetime and thus affect how the objective of prolonging lifetime can be achieved. In this study, we classify the lifetime into different types and give the corresponding CH selection method to achieve the lifetime-extension objective. Simulation results demonstrate that our study can extend the lifetime for the different requirements of sensor networks.

51 citations


Journal ArticleDOI
TL;DR: The modified routing protocols can effectively detect malicious nodes and mitigate their attacks; the performance of these trust routing protocols is evaluated by comparing simulation results with and without the proposed trust mechanism.
Abstract: Node misbehavior due to selfish or malicious intentions can significantly degrade the performance of MANETs because most existing routing protocols in MANETs aim at finding the most efficient path. To deal with misbehavior in MANETs, an incentive mechanism should be integrated into routing decision-making. In this paper we first review existing techniques for secure routing, and then propose routing protocols based on a trust vector model. Each node evaluates its own trust vector parameters about its neighbors by monitoring the neighbors' traffic patterns in the network. At the same time, trust dynamics are included for robustness. We then integrate the trust model into Dynamic Source Routing (DSR) and Ad-hoc On-Demand Distance Vector (AODV), which are the most typical routing protocols in MANETs. We evaluate the performance of these trust routing protocols by comparing the simulation results with and without the proposed trust mechanism. The simulation results demonstrate that the modified routing protocols can effectively detect malicious nodes and mitigate their attacks.

47 citations


Journal ArticleDOI
TL;DR: By checking the iterative matrix and the serial number matrix, the shortest path can be found out simply, intuitively and efficiently.
Abstract: There is too much iteration when using the traditional Floyd algorithm to calculate the shortest path; that is to say, the computation of the traditional Floyd algorithm is too large for an urban road network with a large number of nodes. To address this disadvantage, the Floyd algorithm was improved and optimized further in this study; moreover, the improved Floyd algorithm was used to solve the shortest path problem in a car navigation system, and it achieved good results. Two improvements were made: firstly, an iterative matrix for solving the shortest path was constructed, in which all nodes are compared first and those nodes that have nothing to do with the result are removed, so that the next node can be searched for directly and the number of iterations is reduced; secondly, a serial number matrix for solving the shortest path was constructed to record which nodes are inserted during the iteration. Finally, the number of iterations and the computational complexity of the traditional Floyd algorithm and the improved Floyd algorithm were compared. The experimental results and analysis showed that the computational complexity of the improved Floyd algorithm is reduced to half of that of the traditional algorithm. What's more, by checking the iterative matrix and the serial number matrix, the shortest path can be found simply, intuitively and efficiently.
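For reference, the sketch below gives the baseline Floyd-Warshall iteration in Python together with a path-record matrix playing the role of the serial number matrix described above: it records which node to visit next between a pair, so the shortest path can be unfolded after the iteration finishes. The paper's pruning of irrelevant nodes is not reproduced, and the names are illustrative.

```python
INF = float("inf")

def floyd_with_path(weight):
    """weight: n x n matrix with weight[i][j] = INF if no edge, 0 on the diagonal."""
    n = len(weight)
    dist = [row[:] for row in weight]
    # nxt[i][j] records the node to step to from i on a shortest path to j;
    # it is updated whenever an intermediate node k improves the distance.
    nxt = [[j if dist[i][j] < INF else None for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def reconstruct(nxt, i, j):
    """Unfold the recorded shortest path from i to j."""
    if nxt[i][j] is None:
        return []
    path = [i]
    while i != j:
        i = nxt[i][j]
        path.append(i)
    return path
```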

45 citations


Journal ArticleDOI
TL;DR: In low-density network scenarios, it is observed that the network connectivity under the Gauss-Markov model is significantly lower than that obtained under the Random Waypoint model, while the connectivity of moderate and high-density networks is not significantly dependent on the degree of randomness parameter.
Abstract: The high-level contribution of this paper is a simulation based analysis of the network connectivity, hop count and lifetime of the routes determined for mobile ad hoc networks (MANETs) using the Gauss-Markov mobility model. The Random Waypoint mobility model is used as a benchmark in the simulation studies. Two kinds of routes are determined: routes with the longest lifetime (stable paths) and routes with the minimum hop count. Extensive simulations have been conducted for different network density, node mobility values and different values of the degree of randomness parameter for the Gauss-Markov model. In low-density network scenarios, we observe that the network connectivity under the Gauss-Markov model is significantly lower than that obtained under the Random Waypoint model. In moderate and high density network scenarios, the network connectivity obtained under the two mobility models is almost equal. The minimum hop paths determined under the Gauss-Markov model have a larger number of hops than those computed under the Random Waypoint model. The lifetime of stable paths determined under the Gauss-Markov model is smaller than those determined under the Random Waypoint model. Low-density networks using the Gauss-Markov mobility model attain larger connectivity for intermediate values of the degree of randomness parameter, while the connectivity of moderate and high-density networks is not significantly dependent on the degree of randomness parameter. The minimum hop count of the paths is not much affected by different values of the degree of randomness parameter, while maximum lifetime stable paths are obtained for larger intermediate values of the degree of randomness parameter, but not for unity.
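The Gauss-Markov mobility model referred to here is commonly written with a memory parameter alpha, the degree-of-randomness parameter discussed above (alpha = 0 gives memoryless random motion, alpha = 1 gives linear motion). A minimal Python sketch of the standard update equations, with parameter names of our own choosing, is:

```python
import math
import random

def gauss_markov_steps(steps, alpha, mean_speed, mean_dir,
                       speed0, dir0, sigma_s=1.0, sigma_d=0.5):
    """Generate (speed, direction) samples under the Gauss-Markov model.

    alpha in [0, 1] is the memory / degree-of-randomness parameter;
    sigma_s and sigma_d are the standard deviations of the Gaussian terms.
    """
    s, d = speed0, dir0
    samples = []
    for _ in range(steps):
        s = (alpha * s + (1 - alpha) * mean_speed
             + math.sqrt(1 - alpha ** 2) * random.gauss(0.0, sigma_s))
        d = (alpha * d + (1 - alpha) * mean_dir
             + math.sqrt(1 - alpha ** 2) * random.gauss(0.0, sigma_d))
        samples.append((max(s, 0.0), d))   # clamp negative speeds
    return samples
```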

43 citations


Journal ArticleDOI
TL;DR: A cooperative network intrusion detection system with a multi-agent architecture composed of three types of agents corresponding to the TCP, UDP, and ICMP protocols, respectively, is presented.
Abstract: There is a large amount of noise in the data obtained from the network, which deteriorates intrusion detection performance. To delete the noise data, data preprocessing is performed before the construction of the hyperplane in the support vector machine (SVM). By introducing fuzzy theory into the SVM, a new method is proposed for network intrusion detection. Because the attack behavior differs for different network protocols, a different fuzzy membership function is constructed, so that there is one SVM for each protocol class. To implement this approach, a fuzzy-SVM-based cooperative network intrusion detection system with a multi-agent architecture is presented. It is composed of three types of agents corresponding to the TCP, UDP, and ICMP protocols, respectively. Simulation experiments on the KDD CUP 1999 data set show that the training time is significantly shortened, the storage space requirement is reduced, and the classification accuracy is improved.

43 citations


Journal ArticleDOI
TL;DR: This paper proposes an isolation table to detect intrusion by hierarchical wireless sensor networks and to estimate the effect of intrusion detection, and proves that isolation table intrusion detection can prevent attacks effectively.
Abstract: A wireless sensor network (WSN) is a wireless network consisting of spatially distributed autonomous devices that use sensors to cooperatively monitor environmental conditions, such as battlefield data and personal health information, under limited resources. Avoiding malicious damage is important while information is transmitted in a wireless network. Thus, wireless intrusion detection systems are crucial to the safe operation of wireless sensor networks. Wireless networks are subject to very different types of attacks compared to wired networks. In this paper, we propose an isolation table to detect intrusion in hierarchical wireless sensor networks and to estimate the effect of intrusion detection. The primary experiment proves that isolation table intrusion detection can prevent attacks effectively.

38 citations


Journal ArticleDOI
TL;DR: A new Bayesian fusion algorithm to combine more than one trust component (data trust and communication trust) to infer the overall trust between nodes; simulation results demonstrate that a node is highly trustworthy provided that both trust components simultaneously confirm its trustworthiness and, conversely, a node is highly untrustworthy if its untrustworthiness is asserted by both components.
Abstract: This paper introduces a new Bayesian fusion algorithm to combine more than one trust component (data trust and communication trust) to infer the overall trust between nodes. This research work proposes that one trust component is not enough when deciding on whether or not to trust a specific node in a wireless sensor network. This paper discusses and analyses the results from the communication trust component (binary) and the data trust component (continuous) and proves that either component by itself, can mislead the network and eventually cause a total breakdown of the network. As a result of this, new algorithms are needed to combine more than one trust component to infer the overall trust. The proposed algorithm is simple and generic as it allows trust components to be added and deleted easily. Simulation results demonstrate that a node is highly trustworthy provided that both trust components simultaneously confirm its trustworthiness and conversely, a node is highly untrustworthy if its untrustworthiness is asserted by both components.
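As a loosely related illustration (not the authors' fusion algorithm), the Python sketch below summarizes a binary communication-trust component with a beta-reputation estimate, averages a continuous data-trust component, and only labels a node highly trustworthy when both components agree, mirroring the observation above that a single component can mislead. The weighting, field names and threshold are assumptions.

```python
def beta_trust(successes, failures):
    """Expected trust from binary communication outcomes (beta reputation)."""
    return (successes + 1.0) / (successes + failures + 2.0)

def fuse_trust(comm_succ, comm_fail, data_scores, w_comm=0.5):
    """Combine binary communication trust with continuous data trust.

    data_scores: per-report data consistency values in [0, 1].
    """
    t_comm = beta_trust(comm_succ, comm_fail)
    t_data = sum(data_scores) / len(data_scores) if data_scores else 0.5
    overall = w_comm * t_comm + (1.0 - w_comm) * t_data
    return t_comm, t_data, overall

def is_highly_trusted(t_comm, t_data, threshold=0.8):
    """Require both components to confirm trustworthiness."""
    return t_comm >= threshold and t_data >= threshold
```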

36 citations


Journal ArticleDOI
TL;DR: A scalable approach is proposed to the problem of removing vulnerabilities in such a way that given critical resources cannot be compromised while the removal incurs the least cost.
Abstract: Compact attack graphs implicitly reveal the threat of sophisticated multi-step attacks by enumerating possible sequences of exploits leading to the compromise of given critical resources in enterprise networks with thousands of hosts. For security analysts, the challenge is how to analyze complex attack graphs, possibly with tens of thousands of nodes, to defend the security of the network. In this paper, we discuss three issues. The first is computing the non-loop attack paths, with distance less than a given bound, that a real attacker may practically take in realistic attack scenarios. The second is how to measure the security risk of the given critical resources. The third is finding a solution for removing vulnerabilities in such a way that the given critical resources cannot be compromised while the removal incurs the least cost. We propose a scalable approach to solve each of these three issues. The approach is proved to have polynomial time complexity and can scale to attack graphs with tens of thousands of nodes, corresponding to large enterprise networks.
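A minimal sketch of the first issue, enumerating loop-free attack paths no longer than a given bound over an attack graph represented as an adjacency dictionary, is shown below. It is a plain depth-limited search, not the paper's scalable approach, and the data layout is an assumption.

```python
def bounded_attack_paths(graph, source, target, max_len):
    """Enumerate loop-free paths from source to target with at most max_len edges.

    graph: dict mapping node -> list of successor nodes (a compact attack graph).
    """
    paths = []
    stack = [(source, [source])]
    while stack:
        node, path = stack.pop()
        if node == target:
            paths.append(path)
            continue
        if len(path) - 1 >= max_len:      # distance bound reached
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:           # non-loop constraint
                stack.append((nxt, path + [nxt]))
    return paths

# Example: bounded_attack_paths({"a": ["b", "c"], "b": ["c"], "c": []}, "a", "c", 2)
# returns [['a', 'c'], ['a', 'b', 'c']].
```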

Journal ArticleDOI
TL;DR: Simulation results show that a routing protocol with further energy control can remarkably increase the lifespan of the network with lower energy consumption, compared to a shortest-path routing scheme without energy control.
Abstract: Battery energy is a scarce resource in MANETs and often constrains the communication activities in the network. In this paper, we first present an energy management model in which each node can switch between a power-save mode and an active mode. Based on this model, we propose a routing protocol for further energy control. In the protocol, a new routing function involving both the MAC layer and the network layer is defined. It can dynamically adjust the transmission power of nodes for per-hop energy saving and also considers the residual energy of nodes to balance the traffic load, achieving overall energy efficiency. Among the feasible paths, the one with the maximal value of the joint function is chosen as the optimal route for data transport. Simulation results show that this protocol can remarkably increase the lifespan of the network with lower energy consumption, compared to a shortest-path routing scheme without energy control.

Journal ArticleDOI
TL;DR: A novel method for handling IDS alerts more efficiently is presented, which introduces a new data mining technique, outlier detection, into this field, and designs a specialOutlier detection algorithm for identifying true alerts and reducing false positives (i.e. alerts that are triggered incorrectly by benign events).
Abstract: System managers currently have to process huge numbers of alerts per day, produced by all kinds of security products, network management tools and system logs. This has made it extremely difficult for managers to analyze and react to threats and attacks. An effective technique that can automatically filter and analyze alerts has therefore become an urgent need. This paper presents a novel method for handling IDS alerts more efficiently. It introduces a new data mining technique, outlier detection, into this field, and designs a special outlier detection algorithm for identifying true alerts and reducing false positives (i.e. alerts that are triggered incorrectly by benign events). The algorithm uses frequent attribute values mined from historical alerts as the features of false positives, and then filters false alerts by a score calculated from these features. We also propose a three-phase framework, which not only filters newly arriving alerts in real time, but also learns from these alerts and automatically adjusts the filtering mechanism to new situations. Moreover, our method can help managers analyze the root causes of false positives. It needs no domain knowledge and little human assistance, so it is more practical than existing approaches. We have built a prototype implementation of our method. Through experiments on DARPA 2000, we show that our model can effectively reduce false positives, and on a real-world dataset our model achieves an even higher reduction rate. By comparing with other alert reduction methods, we believe that our model has better performance.
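A small Python sketch of the scoring idea described above, under the assumption that alerts are dictionaries of attribute values: attribute values that occur frequently in historical alerts are treated as features of false positives, and a new alert is scored by how many of its attributes match those frequent values. The support threshold, attribute list and cut-off are illustrative, not the paper's tuned settings.

```python
from collections import Counter

def learn_frequent_values(history_alerts, attributes, min_support=0.3):
    """Mine attribute values appearing in at least min_support of historical alerts."""
    n = len(history_alerts)
    frequent = {}
    for attr in attributes:
        counts = Counter(alert[attr] for alert in history_alerts)
        frequent[attr] = {v for v, c in counts.items() if c / n >= min_support}
    return frequent

def false_positive_score(alert, frequent):
    """Fraction of attributes whose value matches a mined frequent value."""
    hits = sum(1 for attr, vals in frequent.items() if alert.get(attr) in vals)
    return hits / len(frequent)

# Alerts scoring above a chosen threshold are filtered as probable false
# positives; the remaining (outlying) alerts are kept as candidate true alerts.
```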

Journal ArticleDOI
TL;DR: The architecture is suitable for optical packet and optical burst switching and provides appropriate contention resolution schemes and QoS guarantees and a concept, called virtual memory, is developed to allow controllable and reasonable periods for delaying optical traffics.
Abstract: This paper presents an architecture for an all-optical switching node. The architecture is suitable for optical packet and optical burst switching and provides appropriate contention resolution schemes and QoS guarantees. A concept called virtual memory is developed to allow controllable and reasonable periods for delaying optical traffic. Related to its implementation, several engineering issues are discussed, including the use of loop-based optical delay lines, fiber Bragg gratings, and a limited number of signal amplifications. In particular, two implementations using optical flip-flop and laser neuron network based control units are analyzed. This paper also discusses the implementation and performance of an all-optical synchronizer that is able to synchronize arriving data units so that they are aligned with the clock signal associated with the beginning of slots in the node, with an acceptable error.

Journal ArticleDOI
TL;DR: This paper proposes a novel approach that can monitor the botnet activities in an online way by computing the average Euclidean distance for similarity measurement and defines the concept of “feature streams” to describe raw network traffic.
Abstract: Botnet detection has attracted lots of attention since botnet attack is becoming one of the most serious threats on the Internet. But little work has considered the online detection. In this paper, we propose a novel approach that can monitor the botnet activities in an online way. We define the concept of “feature streams” to describe raw network traffic. If some feature streams show high similarities, the corresponding hosts will be regarded as suspected bots which will be added into the suspected bot hosts set. After activity analysis, bot hosts will be confirmed as soon as possible. We present a simple method by computing the average Euclidean distance for similarity measurement. To avoid huge calculation among feature streams, classical Discrete Fourier Transform (DFT) technique is adopted. Then an incremental calculation of DFT coefficients is introduced to obtain the optimal execution time. The experimental evaluations show that our approach can detect both centralized and distributed botnet activities successfully with high efficiency and low false positive rate.
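As a hedged illustration of the similarity step described above, the Python sketch below compresses a per-host feature stream into its first few DFT coefficients and compares two hosts by the average distance between coefficients; the choice of features per time window, the number of coefficients kept and the thresholding are assumptions, and the paper's incremental DFT update is not shown.

```python
import numpy as np

def dft_signature(stream, k=16):
    """Keep the first k DFT coefficients of a per-host feature stream
    (e.g. packets or bytes per time window) as a compact signature."""
    coeffs = np.fft.fft(np.asarray(stream, dtype=float))
    return coeffs[:k]

def avg_coeff_distance(sig_a, sig_b):
    """Average distance between two signatures; small values mean the two
    hosts behave similarly and are candidate bots."""
    return float(np.mean(np.abs(sig_a - sig_b)))

# Example: hosts whose pairwise signature distance falls below a threshold
# would be added to the suspected-bot set for further activity analysis.
```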

Journal ArticleDOI
TL;DR: The combination of the Analytic Hierarchy Process with a BP neural network (AHP-BPNN) is shown to be effective in terms of assessment efficiency and effectiveness.
Abstract: In view of the existing problems of investment risk assessment for high-tech industry projects, such as the lack of a systematic approach and excessive subjectivity, and with the aim of improving assessment efficiency and effectiveness, this paper combines the Analytic Hierarchy Process (AHP) with a BP neural network to establish a new and suitable risk assessment model for high-tech projects. Firstly, we apply AHP to construct a comprehensive risk assessment index system and screen the assessment indexes according to their weights. On this basis, using MATLAB software with a BP neural network model, we carry out example simulations and analyze the results. The results show that the combined Analytic Hierarchy Process and BP neural network model (AHP-BPNN) is effective.
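For the AHP step, index weights are conventionally derived from a pairwise-comparison matrix via its principal eigenvector, together with a consistency check; the BP-network part is not shown. A minimal NumPy sketch of that standard computation (not the paper's specific index system) is:

```python
import numpy as np

# Saaty's random consistency index values for matrix sizes 1..10.
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights(pairwise):
    """Weights from an AHP pairwise-comparison matrix via the principal
    eigenvector, plus the consistency ratio (CR should be below 0.1)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    i = int(np.argmax(vals.real))
    w = np.abs(vecs[:, i].real)
    w = w / w.sum()
    lam_max = vals[i].real
    ci = (lam_max - n) / (n - 1) if n > 1 else 0.0
    ri = RANDOM_INDEX.get(n, 1.49)
    cr = ci / ri if ri else 0.0
    return w, cr
```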

Journal ArticleDOI
TL;DR: A modified algorithm of Low Energy Adaptive Clustering Hierarchy (LEACH) protocol which is a well known energy efficient clustering algorithm for WSNs is proposed, aimed at prolonging the lifetime of a sensor network by balancing energy usage of the nodes.
Abstract: Wireless sensor networks (WSNs) consist of a large number of small, low-data-rate and inexpensive nodes that communicate in order to sense or control a physical phenomenon. The major difference between a WSN and a traditional wireless network is that sensors are very sensitive to energy consumption. Moreover, the performance of sensor network applications depends strongly on the lifetime of the network, and lifetimes of several months to several years are expected. Thus, energy saving is crucial in designing long-lived wireless sensor networks. Many researchers have focused on developing energy-efficient cluster-based protocols for WSNs, but there has not been much research on event-driven WSNs; the focus has been on continuously driven networks. In this paper, we propose a modified algorithm of the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol, a well-known energy-efficient clustering algorithm for WSNs. Our modified protocol, called “Adaptive and Energy Efficient Clustering Algorithm for Event-Driven Application in Wireless Sensor Networks (AEEC)”, is aimed at prolonging the lifetime of a sensor network by balancing the energy usage of the nodes. AEEC gives nodes with more residual energy a higher chance of being selected as cluster head. We also use elector nodes, which take responsibility for collecting the energy information of the nearest sensor nodes and selecting the cluster head. We compared the performance of our AEEC algorithm with the LEACH protocol using simulations.
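As a rough illustration of residual-energy-weighted election (AEEC's elector nodes are not modeled), the sketch below scales the standard LEACH cluster-head threshold by the node's remaining-to-initial energy ratio. The dictionary keys and the scaling rule are assumptions for illustration, not AEEC's exact mechanism.

```python
import random

def ch_threshold(p, round_no, residual_energy, initial_energy, is_eligible):
    """LEACH-style cluster-head threshold, scaled by residual energy so that
    nodes with more remaining energy are more likely to become cluster head.

    p: desired fraction of cluster heads; is_eligible: False if the node has
    already served as CH in the current epoch.
    """
    if not is_eligible:
        return 0.0
    epoch = int(round(1.0 / p))
    base = p / (1.0 - p * (round_no % epoch))
    return base * (residual_energy / initial_energy)

def elect_cluster_head(node, p, round_no):
    """node is assumed to be a dict with 'energy', 'e0' and 'eligible' keys."""
    t = ch_threshold(p, round_no, node["energy"], node["e0"], node["eligible"])
    return random.random() < t
```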

Journal ArticleDOI
TL;DR: The next hop in route planning for mobile agents is determined by the residual energy, the path loss and the stimulated intensity, and the theory analysis and simulation results show that mobile-agent-based model has a better performance in energy consumption and network delay compared to C/S model.
Abstract: In order to improve energy efficiency and decrease network delay in wireless sensor networks applied to emergent event monitoring, a new data gathering algorithm based on mobile agents and event-driven operation is proposed for cluster-based wireless sensor networks. The sensor nodes are clustered dynamically based on the event severity degree, which determines the scale and lifetime of the clusters. A multi-hop virtual cluster is formed between the base station and the cluster heads, in which the base station is regarded as the cluster head. The order in which nodes are visited along the mobile agent's route has a significant impact on the algorithm's efficiency and on the lifetime of the wireless sensor network. In this paper, the next hop in route planning for mobile agents is determined by the residual energy, the path loss and the stimulated intensity. The mobile agents gather information by traversing all member nodes. Theoretical analysis and simulation results show that the mobile-agent-based model performs better in terms of energy consumption and network delay than the C/S model, and that mobile agents are more suitable than the C/S model for data aggregation in wireless sensor networks. Furthermore, DGMA is well suited to large-scale emergent event monitoring.
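A minimal sketch of a next-hop choice driven by residual energy, path loss and stimulated intensity is given below; the weights, field names and the linear scoring form are assumptions for illustration, not the paper's exact planning rule.

```python
def next_hop(candidates, w_energy=0.5, w_loss=0.3, w_intensity=0.2):
    """Pick the next node a mobile agent visits by a weighted score over
    residual energy, path loss and stimulated (event) intensity.

    Each candidate is assumed to be a dict with 'residual_energy',
    'path_loss' and 'stimulated_intensity' keys, all pre-normalized.
    """
    def score(c):
        return (w_energy * c["residual_energy"]
                - w_loss * c["path_loss"]
                + w_intensity * c["stimulated_intensity"])
    return max(candidates, key=score)

# Example: next_hop([{"residual_energy": 0.8, "path_loss": 0.2,
#                     "stimulated_intensity": 0.5}, ...])
```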

Journal ArticleDOI
TL;DR: Channel estimation is an essential problem in OFDM system and pilot-aided channel estimation has been used; a good choice of the pilot pattern should match the channel behavior both in time and frequency domains.
Abstract: Orthogonal Frequency Division Multiplexing (OFDM) has recently been applied widely in wireless communication systems, due to its high data rate, its transmission capability with high bandwidth efficiency, and its robustness to multipath delay. Channel estimation is an essential problem in OFDM systems. Pilot-aided channel estimation has been used; a good choice of pilot pattern should match the channel behavior in both the time and frequency domains. We explored comb pilot arrangements. The advantage of the comb-type pilot arrangement in channel estimation is the ability to track the variation of the channel caused by Doppler frequency; it is observed that the Doppler effect can be reduced, which increases the system mobility. Kalman and Least Squares (LS) estimators have been proposed to estimate the Channel Frequency Response (CFR) at the pilot locations; the CFR at the data subchannels is then obtained by means of interpolation between the estimates at the pilot locations. Different types of interpolation have been used, such as low-pass interpolation, cubic spline interpolation and linear interpolation. Kalman estimation has better performance than LS estimation; the estimators perform about the same for SNR lower than 10 dB. The performance of all schemes has been compared in terms of Bit Error Rate (BER), where a BPSK modulation scheme was used.
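To make the pilot-aided step concrete, the sketch below computes the least-squares channel estimate at comb-type pilot subcarriers (H = Y/X) and then linearly interpolates the channel frequency response across all subcarriers with NumPy. The Kalman estimator and the other interpolation types mentioned above are not shown, and the function interfaces are our own.

```python
import numpy as np

def ls_pilot_estimate(rx_pilots, tx_pilots):
    """Least-squares channel estimate at the pilot subcarriers: H = Y / X."""
    return rx_pilots / tx_pilots

def interpolate_cfr(pilot_idx, h_pilots, n_subcarriers):
    """Linear interpolation of the channel frequency response over all
    subcarriers; real and imaginary parts are interpolated separately."""
    k = np.arange(n_subcarriers)
    re = np.interp(k, pilot_idx, h_pilots.real)
    im = np.interp(k, pilot_idx, h_pilots.imag)
    return re + 1j * im

# Example: h_p = ls_pilot_estimate(rx[pilot_idx], known_pilot_symbols)
#          h_full = interpolate_cfr(pilot_idx, h_p, n_subcarriers)
```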

Journal ArticleDOI
TL;DR: It will be shown that a dynamically adaptive system can greatly enhance performance when compared to static operation, and how a loading algorithm needs to be adjusted if the specific terms of the optical wireless channel are to be rigorously obeyed.
Abstract: We propose a rate-adaptive optical wireless transmission system based on orthogonal frequency division multiplexing for indoor communications. The investigations rely on realistic parameters of the key system components and focus on throughput maximization. We will show that a dynamically adaptive system can greatly enhance performance when compared to static operation, and how a loading algorithm, which optimally performs in power-limited systems, needs to be adjusted if the specific terms of the optical wireless channel are to be rigorously obeyed. Our investigations include scenarios in which the non-negativity constraint on the optical source driving signal is strictly met and in which a certain amount of symmetric clipping is tolerated. In the latter case, the system can be regarded as power-limited and conventional loading algorithms are hence the most suitable. We will show that the transmission rate can be significantly improved even further by accepting a minor increase in the error rate as a result of controlled clipping, and we will compare our results with the upper system capacity limit.

Journal ArticleDOI
TL;DR: The model is more effective and accurate in forecasting short-term power load than the other models; to prove its effectiveness, a single SVM algorithm and a BP network were compared with the results of PSOGA-SVM.
Abstract: In this paper, we propose Hybrid Particle Swarm Optimization (HPSO) with genetic algorithm (GA) mutation to optimize the SVM forecasting model. In the process of doing so, we first use HPSO with the genetic algorithm to pre-treat the data. The PSO-with-GA model is a method for stochastic global optimization based on swarm intelligence. Using the interaction of particles, the PSOGA model searches the solution space intelligently, finds the best solution and reduces redundant information. The PSOGA-SVM method proposed in this paper is based on the global optimization of PSOGA and the locally accurate searching of SVM. To prove the effectiveness of the model, a single SVM algorithm and a BP network were compared with the results of PSOGA-SVM. The results show that the model is more effective and accurate in forecasting short-term power load than the other models. The root-mean-square relative error (RMSRE) of the new model is only 1.82%, while those of the SVM model and the BP network are 2.43% and 4.10%, respectively.

Journal ArticleDOI
TL;DR: This paper presents a novel Java-based OBS network simulator called JAVOBS, discusses its architecture, studies its performance and provides some exemplary results that point out its remarkable flexibility.
Abstract: Since the OBS paradigm has become a potential candidate to cope with the needs of the future all optical networks, it has really caught the attention from both academia and industry worldwide. In this direction, OBS networks have been investigated under many different scenarios comprising numerous architectures and strategies. This heterogeneous context encouraged the development of various simulation tools. In this paper we present our novel Java-based OBS network simulator called JAVOBS. We discuss its architecture, study its performance and provide some exemplary results that point out its remarkable flexibility. This flexibility should permit an easy integration of upcoming new network protocol designs but also support changing and evolving research goals.

Journal ArticleDOI
TL;DR: This paper focuses on the path selection problem in delay-guaranteed sensor networks with a path-constrained mobile sink, and formulate the respective optimization problems and present corresponding practical algorithms.
Abstract: Recent work shows that sink mobility can improve energy efficiency in wireless sensor networks (WSNs). However, data delivery latency often increases due to the speed limit of the mobile sink. In practice, some applications have strict requirements on delay. This paper focuses on the path selection problem in delay-guaranteed sensor networks with a path-constrained mobile sink. The optimal path is chosen to meet the delay requirement while minimizing the energy consumption of the entire network. According to whether data aggregation is adopted, we formulate the respective optimization problems and present corresponding practical algorithms. Theoretical analysis and simulation experiments validate the effectiveness of the proposed formulations and algorithms by comparing the energy consumption with some baseline algorithms.

Journal ArticleDOI
TL;DR: An accurate and efficient localization algorithm, called iterative multilateral localization algorithm based on time rounds, which reduces location errors and prevents abnormal phenomena caused by trilateral localization through limiting the minimum number of neighboring beacon nodes used in different localizing time rounds.
Abstract: In many applications of wireless sensor networks, location is a very important piece of information. It can be used to identify the locations at which sensor readings originate, and in routing and data storage protocols based on geographical areas, among others. Location information can come from manual setting or GPS devices. However, manual setting requires a huge cost in human time, and GPS requires expensive devices. Neither approach is applicable to the localization task of large-scale wireless sensor networks. In this paper, we propose an accurate and efficient localization algorithm, called the iterative multilateral localization algorithm based on time rounds. The algorithm uses a time round mechanism and an anchor-node triangle placement scheme to reduce the error accumulation caused by iterative localization. It also reduces location errors and prevents the abnormal phenomena caused by trilateral localization by limiting the minimum number of neighboring beacon nodes used in different localization time rounds. Experimental results reveal that this algorithm has high localization accuracy; even with large ranging errors, it can achieve good results.
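The core multilateration step, estimating a position from three or more beacon positions and range measurements, can be written as a small linear least-squares problem. The Python sketch below (2-D, NumPy) shows only that baseline step; the time round mechanism and the anchor placement scheme of the paper are not modeled.

```python
import numpy as np

def multilaterate(anchors, distances):
    """Estimate a 2-D position from >= 3 anchor positions and ranges.

    The range equations are linearized against the first anchor and the
    resulting system is solved in the least-squares sense.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = anchors[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], d[1:]):
        A.append([2.0 * (xi - x0), 2.0 * (yi - y0)])
        b.append(d[0] ** 2 - di ** 2 + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol   # estimated (x, y)

# Example: multilaterate([(0, 0), (10, 0), (0, 10)], [7.07, 7.07, 7.07])
# returns approximately (5, 5).
```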

Journal ArticleDOI
TL;DR: A generalized Temporal and Spatial RBAC (TSRBAC) model is proposed, which uses temporal-period and spatial-location based entities to constrain the permissions of objects, user positions, and geographically bounded roles in the TSRBAC model.
Abstract: Securing access to data, applied to mobile service applications with temporal and spatial controlling, requires constructing innovative definitions with temporal and spatial limitations for an access-control system. To cope with the temporal and spatial requirements, we propose a generalized Temporal and Spatial RBAC (TSRBAC) model. In the TSRBAC model, temporal-period and spatial-location based entities are used to constrain the permissions on objects, user positions, and geographically bounded roles. Furthermore, we also present the temporal and spatial relations of Temporal and Spatial Separation of Duties (TSSSD) and Temporal and Spatial Dynamic Separation of Duties (TSDSD) constraints in the TSRBAC model.
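A toy Python sketch of the idea, a role-based permission check that additionally requires the request time to fall inside a permitted period and the user position inside a permitted region, is given below. The class, field names and granularity are our own simplifications, not the formal TSRBAC entities.

```python
from dataclasses import dataclass

@dataclass
class SpatioTemporalPermission:
    """Illustrative permission guarded by an hour window and a bounding box."""
    action: str
    obj: str
    start_hour: int
    end_hour: int
    bbox: tuple  # (min_lat, min_lon, max_lat, max_lon)

def is_authorized(user_roles, role_perms, action, obj, hour, lat, lon):
    """Grant access only if some role holds a matching permission whose
    temporal period and spatial region both contain the request."""
    for role in user_roles:
        for p in role_perms.get(role, []):
            in_time = p.start_hour <= hour < p.end_hour
            in_area = (p.bbox[0] <= lat <= p.bbox[2]
                       and p.bbox[1] <= lon <= p.bbox[3])
            if p.action == action and p.obj == obj and in_time and in_area:
                return True
    return False
```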

Journal ArticleDOI
TL;DR: The key management functions that are responsible for ensuring the secure and continued functioning of the network are discussed and a framework for the realization of an appropriate management system that can meet the challenges posed by all-optical networks is presented.
Abstract: As more intelligence and control mechanisms are added into optical networks, the need for the deployment of a reliable and secure management system using efficient control techniques has become increasingly relevant. While some of the available control and management methods are applicable to different types of network architectures, many of them are not adequate for all-optical networks. These emerging transparent optical networks have particularly unique features and requirements in terms of security and quality of service, thus requiring a much more targeted approach in terms of network management. In particular, the peculiar behavior of all-optical components and architectures brings forth a new set of challenges for network security. In this article, we briefly overview security and management issues that arise in all-optical networks. We then discuss the key management functions that are responsible for ensuring the secure and continued functioning of the network. Consequently, we present a framework for the realization of an appropriate management system that can meet the challenges posed by all-optical networks.

Journal ArticleDOI
TL;DR: A practical and efficient receiver-deniable encryption scheme based on the BCP commitment scheme and the idea of Klonowski et al. is proposed; it is a one-move scheme without any pre-encryption information required to be sent between the sender and the receiver prior to encryption.
Abstract: Deniable encryption is an important cryptographic primitive, essential in all cryptographic protocols where a coercive adversary may come into play with high potential. Deniable encryption plays a key role in internet/electronic voting, electronic bidding, electronic auctions and secure multiparty computation. In this study, a practical and efficient receiver-deniable encryption scheme based on the BCP commitment scheme and the idea of Klonowski et al. is proposed. The proposed scheme is a one-move scheme without any pre-encryption information required to be sent between the sender and the receiver prior to encryption. Moreover, the overhead is low in terms of the size of the ciphertext. We also compare typical deniable encryption schemes with our proposed scheme. Finally, applying the proposed deniable encryption, we give an original coercion-resistant internet voting model without physical assumptions, and compare typical internet voting protocols with our proposed model.

Journal ArticleDOI
TL;DR: In this study, a fault tolerance technology is proposed in order that the system autonomously detects and recovers the fault of the mobile agent due to a failure in a transmission link.
Abstract: The reliable execution of a mobile agent is a very important design issue in building a mobile agent system, and many fault-tolerant schemes have been proposed. In this paper, we present an evaluation of the performance of fault-tolerant schemes for the mobile agent environment. Our evaluation focuses on checkpointing schemes and deals with cooperating agents. We derive the FANTOMAS (Fault-Tolerant approach for Mobile Agents) design, which offers user-transparent fault tolerance that can be activated on request, according to the needs of the task. A theoretical analysis examines the advantages and drawbacks of the fault-tolerant approach for mobile agents. The use of mobile agents, however, is critical and requires reliability with regard to mobile agent failures, which may lead to bad response times and hence to loss of system availability. In this study, a fault tolerance technology is proposed so that the system autonomously detects and recovers from the failure of a mobile agent caused by a failure in a transmission link. We also discuss how transactional agents with different types of commitment constraints can commit. Furthermore, this paper proposes a solution for effective agent deployment using dynamic agent domains.

Journal ArticleDOI
TL;DR: A comparative analysis of the power consumption of optical cross-connect (OXC) equipment based on electrical and/or photonic matrix switching, which will be used for future large-capacity optical networks, is described.
Abstract: We describe a comparative analysis of the power consumption of optical cross-connect (OXC) equipment based on electrical and/or photonic matrix switching, which will be used for future large-capacity optical networks. The switch configurations used for both types of OXCs are also comprehensively discussed. For optical networks that accommodate traffic with a capacity of several Tb/s, the power consumption of OXC equipment based on electrical switching could reach more than 50 kW; this can be reduced to 8-30 kW if photonic switching is used instead. In photonic-switching-based OXC equipment, the power consumption of the transponders becomes the most significant factor, and its reduction is the key issue for power-efficient large-capacity OXC equipment.

Journal ArticleDOI
TL;DR: A new token-based Fairness Algorithm for Priority Processes (FAPP) is proposed that addresses both fairness and process priority while keeping the control message traffic reasonably low, and that ensures liveness in terms of token requests from low-priority processes.
Abstract: One major limitation of token-based mutual exclusion algorithms for distributed environments, such as Raymond's well-known work on the inverted-tree topology, lies in the lack of fairness. Another aspect of the problem is the handling of prioritized processes. In one of our earlier works, both fairness and priority were addressed in an algorithm called MRA-P. However, MRA-P suffered from some major shortcomings, such as lack of liveness and high message complexity. In this work, we propose a new token-based Fairness Algorithm for Priority Processes (FAPP) that addresses both issues while keeping the control message traffic reasonably low. The proposed FAPP algorithm, in spite of considering process priority, ensures liveness in terms of token requests from low-priority processes. Formal verification of FAPP justifies properties such as the correctness of the algorithm, low message complexity, and fairness in token allocation.