
Showing papers in "Networking and Communication Engineering in 2011"


Journal Article
TL;DR: The design and analysis of an adaptive reliable multipath centralized routing algorithm with two central management systems (CMS-1 and CMS-2) that uses a load heuristic to select the optimal and backup paths; the proposed protocol is shown to have low setup time and blocking.
Abstract: Balancing the load among multiple paths leads to degradation in an all-optical WDM network; it is preferable to deliver the entire load on a single optimal path chosen according to the current network-wide load status. For this, multiple lightpaths must be maintained so that the best lightpath can be selected as traffic load conditions change. This paper presents the design and analysis of an adaptive reliable multipath centralized routing algorithm (ARMCR) with two central management systems (CMS-1 and CMS-2) that uses a load heuristic to select the optimal and backup paths. When a request arrives at a source, the source sends the information to CMS-1, which allocates the optimal primary and backup paths for the request based on the number of available free wavelengths. The source starts serving the request on the assigned primary path while CMS-2 tracks changes in the available free wavelengths and failures on all paths. If the number of free wavelengths on the assigned primary path falls below a threshold 'N' while another path with more free wavelengths is available, CMS-2 promotes that path to primary and assigns the path with the next-highest number of free wavelengths as the backup. The advantage of this algorithm is that it reduces setup time by using two CMSs that share the work; if one fails, the other takes over the entire process. As opposed to reactive protocols, the proposed protocol is proactive in that it avoids the chance of blocking. Furthermore, the approach is self-regulating: it automatically adapts to traffic load variation across the network. Simulation results show that the proposed protocol has low setup time and blocking.
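
The reassignment rule described above lends itself to a compact illustration. The following is a minimal sketch assuming dictionary-based bookkeeping of free wavelengths per candidate lightpath; the path names, data layout, and helper name are hypothetical, not the paper's implementation:

```python
def monitor_and_reassign(current_primary, free_wavelengths, threshold_n):
    """Sketch of the CMS-2 rule from the abstract: if the primary path's
    free-wavelength count falls below the threshold N and another path has
    more free wavelengths, promote that path to primary and make the path
    with the next-highest count the backup."""
    ranked = sorted(free_wavelengths, key=free_wavelengths.get, reverse=True)
    best = ranked[0]
    if (free_wavelengths[current_primary] < threshold_n
            and free_wavelengths[best] > free_wavelengths[current_primary]):
        new_backup = ranked[1] if len(ranked) > 1 else None
        return best, new_backup
    return current_primary, None  # keep the existing assignment

# Hypothetical free-wavelength counts per candidate lightpath.
wl = {"P1": 2, "P2": 9, "P3": 6}
print(monitor_and_reassign("P1", wl, threshold_n=4))  # -> ('P2', 'P3')
```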

5 citations


Journal Article
TL;DR: This paper proposes a Two-tier Overlay Multicast Architecture (TOMA) to provide scalable and efficient multicast support for various group communication applications and suggests several provisioning algorithms to locate proxies, select overlay links, and allocate link bandwidth.
Abstract: In this paper, we propose a Two-tier Overlay Multicast Architecture (TOMA) to provide scalable and efficient multicast support for various group communication applications. In TOMA, Multicast Service Overlay Network (MSON) is advocated as the backbone service domain, while end users in access domains form a number of small clusters, in which an application-layer multicast protocol is used for the communication between the clustered end users. TOMA is able to provide efficient resource utilization with less control overhead, especially for large-scale applications. It also alleviates the state scalability problem and simplifies multicast tree construction and maintenance when there are large numbers of groups in the network. To help MSON providers efficiently plan backbone service overlay, we suggest several provisioning algorithms to locate proxies, select overlay links, and allocate link bandwidth. Extensive simulation studies demonstrate the promising performance of TOMA.

4 citations


Journal Article
TL;DR: This paper presents a detailed review of QoS, manycasting, and OBS networks, showing how the Optical Burst Switched network resolves some of the QoS constraints to an extent.
Abstract: During the last few years the Internet has grown tremendously and has penetrated all aspects of everyday life. Network services can be classified according to their level of QoS constraints, which describe how tightly a service is bound by specific bandwidth, delay, jitter, and loss characteristics. At first sight, three basic levels of end-to-end QoS can be provided across a heterogeneous network: best-effort service, differentiated service (also called soft QoS), and guaranteed service (also called hard QoS), in which an absolute reservation of network resources for specific traffic is provided through bandwidth reservation mechanisms. Quality of Service (QoS), achieved by managing the delay, delay variation (jitter), bandwidth, and packet-loss parameters of a network, is the key to a successful end-to-end business solution; it is the set of techniques used to manage network resources. Distribution applications such as video conferencing, distributed interactive simulations (DIS), Grid computing, storage area networks (SAN), and distributed content distribution networks (CDN) need to transfer huge amounts of data from a source to multiple destinations. Such transfers require a large amount of bandwidth, a need that manycasting addresses. Among QoS constraints such as physical-layer impairments, delay, reliability, fault tolerance, data-access latency, and jitter, the Optical Burst Switched (OBS) network resolves several to an extent: it reduces delay, utilizes bandwidth better than an OCS network, and removes the throughput limitations and reduces the processing requirements of an OPS network. This paper presents a detailed review of QoS, manycasting, and OBS networks.

4 citations


Journal Article
TL;DR: In this article, a polysilicon nano-wire piezoresistor pressure sensor was fabricated by means of RIE (reactive ion etching) to enhance the sensitivity of the sensor.
Abstract: A polysilicon nanowire piezoresistor pressure sensor was fabricated by means of RIE (reactive ion etching). This paper focuses on the structural design and optimization of a MEMS nanowire piezoresistive pressure sensor to enhance its sensitivity. The polysilicon nanowire pressure sensor has a 100 × 100 nm² cross-sectional area and a thickness of about 10 nm. The diaphragm in this work is made thinner than that of conventional bulk-silicon piezoresistive pressure sensors, and the single polysilicon nanowire piezoresistive pressure sensor was compared with a conventional bulk piezoresistive pressure sensor. The finite element method (FEM) is adopted to optimize sensor parameters such as the resistor location. Silicon nanowires below 340 nm exhibit a good piezoresistive effect, and it has been proposed that silicon nanowires have seven times the piezoresistive effect of bulk silicon. The polysilicon nanowire is fabricated so that it forms a bridge between the polysilicon diaphragm and the substrate. The fabricated polysilicon nanowire has a high sensitivity of about 156 mV/(V·kPa).

4 citations


Journal Article
TL;DR: This paper presents and comments on the concepts, methodologies, prevailing technologies, potential benefits and the challenges faced by NCW in modern warfare.
Abstract: Technological advances in network computing, mobile computing, data transmission, sensors, and integrated digital technologies for intelligence, surveillance, and reconnaissance have led to the development of highly advanced battlefield weapons. The pace of current and future battles will be so fast that there will be no time to wait for instructions and advice from the commanding officer. Added to this is the complex nature of the wars that defence forces are asked to handle the world over. Together, these factors have revolutionized the way wars are prosecuted; the shift is clearly towards what is commonly called Network-Centric Warfare (NCW). This paper presents and comments on the concepts, methodologies, prevailing technologies, potential benefits, and challenges of NCW in modern warfare. The discussion is purely theoretical and academic in nature, with the aim of reviewing and providing insight into this emerging field.

3 citations


Journal Article
TL;DR: The framework on which most P2P networks are built is examined, and from this, it is shown how attacks on P2P networks leverage the very essence of the networks themselves: decentralization of resources and of control.
Abstract: This paper offers a survey of the emerging field of private peer-to-peer networks, which can be defined as Internet overlays in which the resources and infrastructure are provided by the users, and new users may only join by personal invitation. The last few years have seen rapid developments in this field, many of which have not previously been described in the research literature. In recent years, peer-to-peer (P2P) networks have soared in popularity in the form of file-sharing applications, and with this popularity come security implications and vulnerabilities. We examine the framework on which most P2P networks are built and, from this, how attacks on P2P networks leverage the very essence of the networks themselves: decentralization of resources and of control. Additionally, we look at the privacy and usage attacks that arise in P2P networks, as well as approaches that can be used to address some of these issues.

3 citations


Journal Article
TL;DR: A secure and efficient remote user authentication scheme for multi-server environments that is a pattern classification system based on an artificial neural network and can withstand the replay attack.
Abstract: Conventional authentication schemes allow a server to authenticate the legitimacy of a remote login user. However, these schemes are not suited to multi-server architecture environments. This paper presents a secure and efficient remote user authentication scheme for multi-server environments. The scheme is a pattern classification system based on an artificial neural network. Users need only remember their identity and password to log in to the various servers, and they can freely choose their password. Furthermore, the system is not required to maintain a verification table and can withstand replay attacks.

3 citations


Journal Article
TL;DR: This paper proposes a Frequency Hopped Spread Spectrum (FHSS) communication system using chaos and its synchronization, and the performances of chaotic sequences in multiple access communication are shown to be similar to that of PN sequences.
Abstract: This paper proposes a Frequency Hopped Spread Spectrum (FHSS) communication system using chaos and its synchronization. The frequency-hopping spread spectrum technique uses a pseudorandom number (PN) generator to produce a random sequence of frequencies. A new class of signature sequences for use in FHSS communication systems is proposed, derived from the theory of chaos. The correlation properties of these chaotic sequences are similar to those of random white noise, and their number and lengths are not restricted as they are for m-sequences. The performance of chaotic sequences in multiple-access communication is shown to be similar to that of PN sequences. Furthermore, due to their noise-like appearance, chaotic sequences outperform PN sequences in low probability of intercept: the spreading sequence changes from one bit to the next in a highly random fashion, making undesired interception very difficult. The synchronization of the system is also proposed.
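
To make the sequence-generation idea concrete, here is a minimal sketch assuming the logistic map as the chaotic generator; the abstract does not name a specific map, so the map, its parameter `r`, and the quantization step are illustrative assumptions:

```python
def chaotic_hop_sequence(x0, n_hops, n_channels, r=3.99):
    """Generate a frequency-hop channel-index sequence from the logistic
    map: iterate x -> r*x*(1-x) in its chaotic regime and quantize each
    state to one of n_channels hop channels."""
    x, hops = x0, []
    for _ in range(n_hops):
        x = r * x * (1.0 - x)             # logistic map, chaotic for r near 4
        hops.append(int(x * n_channels))  # map state in (0, 1) to a channel
    return hops

# A transmitter and receiver sharing the seed x0 regenerate the same
# hop pattern, which is the basis of the synchronization scheme.
print(chaotic_hop_sequence(x0=0.61, n_hops=10, n_channels=64))
```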

2 citations


Journal Article
TL;DR: In this article, the authors proposed a better scanning mechanism to reduce the handoff latency in IEEE 802.11b based wireless networks, which reduces the time to find the closest and best AP among all neighbor APs.
Abstract: Presently, IEEE 802.11b based wireless networks are widely used in fields ranging from personal to business applications. Handoff is a critical issue in these networks: when a mobile node (MN) moves away from the range of its current access point (AP), it needs to perform a link-layer handoff, which causes data loss and interruption in communication. According to IEEE 802.11b, a link-layer (L2) handoff consists of three phases: scanning, authentication, and re-association, with the scanning phase accounting for about 90% of the total handoff delay. In this paper we therefore propose a better scanning mechanism to reduce handoff latency. Using GPS we determine the MN's position as well as the direction of its velocity, which reduces the time needed to find the closest and best AP among all neighboring APs. This process effectively reduces the handoff latency.
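
As a rough illustration of the GPS-assisted selection, the sketch below scores each neighbor AP by the angle between the MN's velocity vector and the direction to the AP, preferring APs ahead of the MN. The scoring heuristic, coordinates, and AP names are assumptions rather than the paper's exact mechanism, and the sketch assumes nonzero speed and AP distance:

```python
import math

def select_best_ap(mn_pos, mn_velocity, neighbor_aps):
    """Pick the neighbor AP most nearly in the MN's direction of travel,
    breaking ties by distance."""
    def score(ap_pos):
        dx, dy = ap_pos[0] - mn_pos[0], ap_pos[1] - mn_pos[1]
        dist = math.hypot(dx, dy)
        speed = math.hypot(*mn_velocity)
        cos_angle = (dx * mn_velocity[0] + dy * mn_velocity[1]) / (dist * speed)
        angle = math.acos(max(-1.0, min(1.0, cos_angle)))  # 0 = straight ahead
        return (angle, dist)  # prefer small angle, then small distance
    return min(neighbor_aps, key=lambda name: score(neighbor_aps[name]))

# Hypothetical AP coordinates; the MN moves roughly along +x.
aps = {"AP1": (120.0, 40.0), "AP2": (-80.0, 10.0), "AP3": (90.0, 95.0)}
print(select_best_ap((0.0, 0.0), (1.0, 0.2), aps))  # -> AP1
```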

2 citations


Journal Article
TL;DR: This paper proposes a novel Optimal Path Energy Efficient Routing (OPEER) algorithm for WSNs, which manages uniform load distribution amongst the paths so as to improve the network performance as compared with the traditional energy efficient routing strategies.
Abstract: Wireless Sensor Networks (WSNs) have become one of the emerging trends of modern communication systems. Routing plays a vital role in the design of a WSN, as normal IP-based routing will not suffice. Design issues for a routing protocol involve various key parameters like energy awareness, security, QoS requirements, etc. Energy awareness is one of the vital parameters, as the batteries used in sensor nodes cannot be recharged often, and many energy-aware protocols have been proposed in the literature. In this paper, we propose a novel Optimal Path Energy Efficient Routing (OPEER) algorithm for WSNs, which manages uniform load distribution amongst the paths so as to improve network performance as compared with traditional energy-efficient routing strategies.
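
The abstract does not spell out OPEER's cost function, but a common way to realize energy-aware path selection is the max-min residual-energy rule sketched below; node names and energy values are illustrative, not from the paper:

```python
def select_path(candidate_paths, residual_energy):
    """Max-min residual-energy rule: choose the route whose weakest node
    has the most remaining energy, steering load away from paths that
    contain nearly depleted nodes."""
    return max(candidate_paths,
               key=lambda path: min(residual_energy[n] for n in path))

# Hypothetical topology: node B is nearly depleted.
energy = {"A": 0.9, "B": 0.2, "C": 0.7, "D": 0.8, "E": 0.6}
paths = [["A", "B", "E"], ["A", "C", "D", "E"]]
print(select_path(paths, energy))  # -> ['A', 'C', 'D', 'E'], avoiding B
```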

2 citations


Journal Article
TL;DR: Through simulation results, it has been shown how load balancing reduces the congestion and packet drop rate, which is highly desirable in any kind of network.
Abstract: Routing in networks is a challenging issue. In this paper, we assume an all-to-all communication mode in an N × N grid optical network and employ horizontal-vertical routing. This type of routing uses the center node to route packets from source to destination, so the load on the center node increases and packet loss sets in. To route packets from the same source to the same destination, alternate paths (avoiding the center) are followed; in other words, load balancing is performed to reduce or eliminate the load at the over-utilized central node and to increase the load at under-utilized corner and other intermediate nodes. Through simulation results, we show how load balancing reduces congestion and the packet drop rate, which is highly desirable in any kind of network.
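
A small sketch of the idea, assuming horizontal-then-vertical dimension-order routing and a BFS fallback that skips the center; the paper's actual alternate-path rule may differ:

```python
from collections import deque

def hv_path(src, dst):
    """Horizontal-then-vertical route: walk along x first, then along y."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

def balanced_path(src, dst, center, n):
    """Use the horizontal-vertical route unless it crosses the congested
    center node; in that case fall back to a BFS shortest path on the
    n x n grid with the center node removed."""
    primary = hv_path(src, dst)
    if center not in primary:
        return primary
    prev, seen, q = {}, {src}, deque([src])
    while q:
        x, y = q.popleft()
        if (x, y) == dst:
            break
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < n and 0 <= nxt[1] < n
                    and nxt != center and nxt not in seen):
                seen.add(nxt)
                prev[nxt] = (x, y)
                q.append(nxt)
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

# Routing (2, 0) -> (2, 4) on a 5x5 grid would pass through the center
# node (2, 2); the fallback detours around it.
print(balanced_path((2, 0), (2, 4), center=(2, 2), n=5))
```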

Journal Article
TL;DR: The classification of DNA cryptography is proposed and each of its approaches is explained briefly, showing the directions that need to be addressed further in the field of DNA cryptography.
Abstract: DNA cryptography is an emerging field of cryptography that derives its basis from DNA computing, in which DNA is used as a computational element along with molecular techniques. This paper surveys the overall work done in DNA cryptography, proposes a classification of DNA cryptography, and briefly explains each of its approaches. By reviewing the leading technologies of current research, the paper shows the directions that need to be addressed further in the field of DNA cryptography.

Journal Article
TL;DR: A novel priority-based scheduling algorithm using fuzzy logic and artificial neural networks is proposed that addresses these aspects simultaneously; initial results show that a good degree of fairness is attained while keeping priorities intact.
Abstract: Worldwide Interoperability for Microwave Access (WiMAX) is one of the most familiar broadband wireless access technologies supporting multimedia transmission. The IEEE 802.16 Medium Access Control (MAC) layer covers a large area of bandwidth allocation and QoS mechanisms for various types of applications. Nevertheless, the standard lacks a MAC scheduling algorithm with the multi-dimensional objective of satisfying users' QoS requirements and maximizing channel utilization while ensuring fairness among users. We therefore propose a novel priority-based scheduling algorithm using fuzzy logic and artificial neural networks (ANNs) that addresses these aspects simultaneously. Initial results show that a good degree of fairness is attained while keeping priorities intact, and that maximum channel utilization is achieved with a negligible increase in processing time.

Journal Article
TL;DR: This paper analyzes the coverage and connectivity of ad hoc networks and enhances connectivity using Rician fading, computing the node isolation probability and coverage with respect to shadowing and fading phenomena in the presence of channel randomness.
Abstract: This paper analyzes the coverage and connectivity of ad hoc networks and enhances network connectivity by exploiting Rician fading. To this end, we compute the node isolation probability and coverage with respect to shadowing and fading phenomena in an ad hoc network in the presence of channel randomness. We concentrate on Rayleigh fading and Rician fading when finding the node isolation probability, and MIMO (multiple-input multiple-output) techniques are used to enhance network coverage and connectivity. Further, this paper considers lognormal shadowing, Rayleigh fading, and Rician fading to simulate node isolation probability versus node density.
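
For context, a standard starting point for such an analysis models the nodes as a Poisson process of density $\rho$ and expresses isolation in terms of the link-success probability under the chosen fading model. This general form is a well-known result and an assumption about the paper's setup, not a quotation from it:

```latex
% Isolation probability of a node in a Poisson network of density \rho,
% where p(r) is the probability that a link of length r survives the
% fading/shadowing model under study (assumed standard setup):
P_{\mathrm{iso}} = \exp\!\left(-\rho \int_{0}^{\infty} p(r)\, 2\pi r \,\mathrm{d}r\right)
% With a deterministic range r_0 (no fading), p(r) = \mathbf{1}\{r \le r_0\}
% and the expression reduces to P_{\mathrm{iso}} = e^{-\rho \pi r_0^{2}}.
```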

Journal Article
TL;DR: An algorithm for declaring, blocking, and discarding a node if it is malicious; the defense algorithm also increases the received bandwidth and reduces the packet loss of legitimate users.
Abstract: We cannot predict the behavior of a node in a wireless network scenario. We always expect good and gentle behavior from a node, irrespective of the channel utilization or throughput degradation caused by a particular node. We can easily declare a node non-legitimate or suspicious, but it is less clear what to do with a node that increases throughput and reduces unused slots without being a legitimate node. Throughput may decrease because of suspicious nodes acting maliciously, but it can also increase because a node does not follow the back-off rules; we call such a node an opportunist node. We emphasize an identification scheme for node characterization, based on parameters we have identified at the MAC sublayer. A node is recognized and declared an attacker, and can be discarded from the network, if it crosses a predefined threshold value calculated from these parameters; before that threshold is reached, the node is punished by increasing its back-off period by a predefined calculated value. Nodes declared opportunists are rewarded with decreased back-off values. We propose an algorithm for declaring, blocking, and discarding malicious nodes and for rewarding opportunist nodes; the defense algorithm increases the received bandwidth and reduces the packet loss of legitimate users.
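
The punish/reward policy can be summarized in a few lines. The sketch below is an interpretation of the thresholds described in the abstract; the score, threshold values, and back-off arithmetic are illustrative assumptions:

```python
def police_node(node, misbehavior_score, block_threshold, warn_threshold):
    """Threshold policy sketched from the abstract: discard a node whose
    MAC-layer misbehavior score crosses the blocking threshold, penalize
    it with a longer back-off below that, and reward opportunist
    (negative-score) nodes with a shorter back-off."""
    if misbehavior_score >= block_threshold:
        node["blocked"] = True                 # declare attacker, discard
    elif misbehavior_score >= warn_threshold:
        node["backoff"] *= 2                   # punish: enlarge back-off window
    elif misbehavior_score < 0:
        node["backoff"] = max(1, node["backoff"] // 2)  # reward opportunist
    return node

n = {"id": 7, "backoff": 16, "blocked": False}
print(police_node(n, misbehavior_score=0.4,
                  block_threshold=1.0, warn_threshold=0.3))
```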

Journal Article
TL;DR: A semantic component in the cloud architecture is suggested to support ontology-based representation and facilitate context-based information retrieval, complementing cloud schedulers for effective resource management.
Abstract: Cloud computing is a new computing technology of the Internet world. Clouds are large pools of easily usable and accessible virtualized resources (such as hardware, development platforms, and/or services) accessed via standard protocols and interfaces defined by the cloud architecture. Aggregating and monitoring these resources and matching suitable resources to an application is a challenging issue. This paper suggests a semantic component in the cloud architecture to support ontology-based representation and facilitate context-based information retrieval, complementing cloud schedulers for effective resource management.

Journal Article
TL;DR: This paper considers sophisticated attacks that are protocol-compliant, non-intrusive, and utilize legitimate application-layer requests to overwhelm system resources, and proposes a counter-mechanism that consists of a suspicion assignment mechanism and a DDoS-resilient scheduler, DDoS Shield.
Abstract: Distributed Denial of Service (DDoS) attacks are becoming ever more challenging given the vast resources and techniques increasingly available to attackers; they constitute one of the most important threats and are among the hardest security problems in today's Internet. In this paper, we consider sophisticated attacks that are protocol-compliant, non-intrusive, and utilize legitimate application-layer requests to overwhelm system resources. We characterize application-layer resource attacks as request-flooding, asymmetric, or repeated one-shot, on the basis of the application workload parameters they exploit. To protect servers from these attacks, we propose a counter-mechanism that consists of a suspicion assignment mechanism and a DDoS-resilient scheduler, DDoS Shield. In contrast to prior work, our suspicion mechanism assigns a continuous value, rather than a binary measure, to each client session, and the scheduler uses these values to determine if and when to schedule a session's requests. Using testbed experiments on a web application, we demonstrate the strength of these resource attacks and evaluate the efficacy of our counter-mechanism. For instance, an asymmetric attack that overwhelms the server resources increases the response time of legitimate clients from 0.1 seconds to 10 seconds; under the same attack scenario, DDoS Shield limits the effects of false negatives and false positives and improves the victims' performance to 0.8 seconds.
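
A minimal sketch of the scheduling side, assuming each session carries a continuous suspicion value in [0, 1] as described; the data layout is hypothetical, and the real DDoS Shield scheduler also decides when (not just in what order) to serve requests:

```python
import heapq

def schedule_requests(sessions):
    """Serve pending requests in order of increasing session suspicion,
    so that likely-legitimate sessions are delayed least."""
    heap = [(s["suspicion"], s["id"], s["pending"]) for s in sessions]
    heapq.heapify(heap)
    order = []
    while heap:
        suspicion, sid, pending = heapq.heappop(heap)
        order.append((sid, suspicion, pending))
    return order

sessions = [
    {"id": "s1", "suspicion": 0.05, "pending": 3},   # looks legitimate
    {"id": "s2", "suspicion": 0.92, "pending": 40},  # request-flooding pattern
    {"id": "s3", "suspicion": 0.30, "pending": 5},
]
print(schedule_requests(sessions))  # s1 served first, s2 last
```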

Journal Article
TL;DR: The performance of the scheduling schemes most used in MANETs is analyzed, and it is found that DiffServ is the best scheduling technique compared to the WRR, WF, and strict-priority techniques.
Abstract: Ad hoc networks are flexible, self-configurable, and easy and fast to deploy. There is a growing need to support better-than-best-effort quality of service (QoS) in mobile ad hoc networks. A lot of research has been done, mainly in the area of routing in isolated ad hoc networks. QoS is very challenging and important in multimedia and real-time application environments; therefore, a QoS architecture for ad hoc networks that can internetwork with infrastructure-based QoS approaches is necessary. This paper analyzes the performance of the scheduling schemes most used in MANETs. The comparison is based on end-to-end delay, maximum throughput, and jitter (delay variance). It is found that DiffServ is the best scheduling technique compared to the WRR, WF, and strict-priority techniques.
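
For reference, one of the compared schemes, weighted round robin (WRR), can be sketched in a few lines; the queue names, weights, and packet labels are illustrative:

```python
from collections import deque

def weighted_round_robin(queues, weights, n_slots):
    """Minimal WRR: in each cycle, queue i may send up to weights[i]
    packets, so higher-weight classes get proportionally more service."""
    out = []
    qs = {name: deque(pkts) for name, pkts in queues.items()}
    while len(out) < n_slots and any(qs.values()):
        for name, w in weights.items():
            for _ in range(w):
                if qs[name] and len(out) < n_slots:
                    out.append(qs[name].popleft())
    return out

queues = {"voice": ["v1", "v2", "v3"], "data": ["d1", "d2", "d3", "d4"]}
print(weighted_round_robin(queues, weights={"voice": 2, "data": 1}, n_slots=7))
# -> ['v1', 'v2', 'd1', 'v3', 'd2', 'd3', 'd4']
```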

Journal Article
TL;DR: This paper's pivotal focus is on investigating the impact of the Session Initiation Protocol and its future course in supporting applications across various communication-related requirements.
Abstract: VoIP technology has ubiquitously revolutionized communication media across various spectra. This paper's pivotal focus is on investigating the impact of the Session Initiation Protocol (SIP) and its future course in supporting applications across various communication-related requirements. VoIP, an extensively used technology, is very popular and is used to transmit both data and voice packets over a single line at the lowest cost [1]. Likewise, VoATM provides synchronous transmission and is the fastest of all, while VoFR is another technology gaining colossal importance in frame-relay networks; its combination with VoIP and ATM is being designed for mass usage. Hence, there is always a need to refine and explore these technologies so that they are delivered to all users at a reasonable cost with the best possible service; each technology is better than the others in some way or another. Alongside these, protocols such as RTP and SIP are used for multimedia and session-oriented applications, respectively. This paper attempts a complete investigation of the SIP protocol [2].

Journal Article
TL;DR: In this new algorithm, the ant individual is transformed by adaptive Cauchy mutation and density selection, and the results show that the convergence speed and computing precision of the new algorithm are both very good.
Abstract: Ant Colony Optimization (ACO) is a metaheuristic combinatorial optimization technique and an excellent combinatorial optimization procedure. A novel ant colony optimization is proposed: to improve search performance, the principles of evolutionary and immune algorithms are combined with the classic continuous ant colony optimization algorithm. In this new algorithm, the ant individual is transformed by adaptive Cauchy mutation and density selection. To verify the new algorithm, typical functions such as objective functions and path-construction functions are used, and the results are compared with the continuous ant colony optimization algorithm. The results show that the convergence speed and computing precision of the new algorithm are both very good. The algorithm can be used to solve real-time problems like routing, assignment, and scheduling.
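
A minimal sketch of the Cauchy-mutation step, assuming a standard inverse-CDF Cauchy sample and a greedy accept rule standing in for the paper's selection mechanism; the scale, bounds, and test objective are illustrative:

```python
import math
import random

def cauchy_mutate(solution, scale, bounds):
    """Perturb each coordinate with heavy-tailed Cauchy noise, mixing
    small local moves with occasional long jumps, then clamp to bounds."""
    lo, hi = bounds
    return [min(hi, max(lo, x + scale * math.tan(math.pi * (random.random() - 0.5))))
            for x in solution]

def sphere(x):
    """A typical continuous test objective."""
    return sum(v * v for v in x)

random.seed(1)
ant = [2.0, -1.5, 0.5]
for _ in range(100):                      # refine one ant's solution
    candidate = cauchy_mutate(ant, scale=0.1, bounds=(-5.0, 5.0))
    if sphere(candidate) < sphere(ant):   # greedy accept, standing in for
        ant = candidate                   # the paper's selection step
print(ant, sphere(ant))
```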

Journal Article
TL;DR: This paper presents a query-routing scheme to provide effective and efficient querying in peer-to-peer environments, describing how queries search for a target file among a large number of peers and how resource searching is limited by assigning cluster heads among the peers.
Abstract: Peer-to-peer is a decentralized model in which each peer has equivalent capabilities for providing data or services to other peers, and each peer manages its own data. In this paper we present a query-routing scheme to provide effective and efficient querying in peer-to-peer environments. We describe how queries need to search for the target file among a large number of peers and how resource searching is limited by assigning cluster heads among the peers. An ant-agent search technique is used for query routing, together with a ranking methodology for selecting peers, to minimize network overhead and delay.
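
A toy sketch of the cluster-head idea: forward the query only to cluster heads, each of which answers from its best-ranked member peer instead of flooding every peer. The ranking function and data layout are assumptions; the paper uses ant-agent search for this step:

```python
def route_query(query, clusters, rank):
    """Send the query to each cluster head, which returns its member
    peer with the highest rank for that query."""
    return [(head, max(members, key=lambda peer: rank(peer, query)))
            for head, members in clusters.items()]

clusters = {"H1": ["p1", "p2"], "H2": ["p3", "p4", "p5"]}
has_file = {"p2", "p4"}                   # hypothetical content index
print(route_query("file.mp3", clusters, rank=lambda p, q: p in has_file))
# -> [('H1', 'p2'), ('H2', 'p4')]
```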