Showing papers in "Int'l J. of Communications, Network and System Sciences in 2015"


Journal ArticleDOI
TL;DR: A survey of both theoretical and numerical aspects of the compressive sensing technique, which has many potential applications in signal processing, wireless communication, cognitive radio and medical imaging.
Abstract: In digital signal processing (DSP), Nyquist-rate sampling completely describes a signal by exploiting its bandlimitedness. Compressed Sensing (CS), also known as compressive sampling, is a DSP technique for efficiently acquiring and reconstructing a signal from a reduced number of measurements, by exploiting its compressibility. The measurements are not point samples but more general linear functions of the signal. CS can capture and represent sparse signals at a rate significantly lower than that ordinarily required by Shannon's sampling theorem. It is interesting to note that most signals in reality are sparse, especially when they are represented in some domain (such as the wavelet domain) where many coefficients are close to or equal to zero. A signal is called K-sparse if it can be exactly represented by a basis and a set of coefficients of which only K are nonzero. A signal is called approximately K-sparse if it can be represented up to a certain accuracy using K non-zero coefficients. As an example, a K-sparse signal is the class of signals that are the sum of K sinusoids chosen from the N harmonics of the observed time interval. Taking the DFT of any such signal would render only K non-zero values. An example of approximately sparse signals is when the coefficients, sorted by magnitude, decrease following a power law. In this case the sparse approximation constructed by choosing the K largest coefficients is guaranteed to have an approximation error that decreases with the same power law as the coefficients. The main limitation of CS-based systems is that they employ iterative algorithms to recover the signal. These algorithms are slow, and hardware solutions have become crucial for higher performance and speed. This technique requires fewer data samples than traditionally needed when capturing a signal with relatively high bandwidth but a low information rate. As a main feature of CS, efficient algorithms such as ℓ1-minimization can be used for recovery. This paper gives a survey of both theoretical and numerical aspects of the compressive sensing technique and its applications. The theory of CS has many potential applications in signal processing, wireless communication, cognitive radio and medical imaging.
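
To make the recovery idea concrete, below is a minimal numpy sketch of CS acquisition and greedy reconstruction. It uses orthogonal matching pursuit as a stand-in for the ℓ1-minimization mentioned above; the signal length, measurement count, and Gaussian sensing matrix are illustrative assumptions rather than the paper's setup.

# Minimal compressive sensing sketch: sense a K-sparse signal with a
# random Gaussian matrix, then recover it greedily with orthogonal
# matching pursuit (a stand-in for the l1-minimization discussed above).
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 80, 5                  # ambient dim, measurements, sparsity

x = np.zeros(N)                       # K-sparse ground-truth signal
support = rng.choice(N, K, replace=False)
x[support] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # measurement matrix
y = Phi @ x                           # M << N linear measurements

# Orthogonal matching pursuit: pick the column most correlated with the
# residual, then re-fit on the chosen support by least squares.
residual, idx = y.copy(), []
for _ in range(K):
    idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
    sol, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
    residual = y - Phi[:, idx] @ sol

x_hat = np.zeros(N)
x_hat[idx] = sol
print("recovery error:", np.linalg.norm(x - x_hat))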

65 citations


Journal ArticleDOI
TL;DR: Diagnostic tests and examination of forecast accuracy measures indicate that the multiplicative seasonal ARIMA/GARCH model shows a good estimation when dealing with volatility clustering in the data series and can be considered to be a flexible model to capture well the characteristics of EVN traffic series and give reasonable forecasting results.
Abstract: This paper highlights the statistical procedure used in developing models that can capture and forecast the traffic of a mobile communication network operating in Vietnam. To build such models, we follow the Box-Jenkins method to construct a multiplicative seasonal ARIMA model that represents the mean component using past values of traffic, then incorporate a GARCH model to represent its volatility. The traffic is collected from the EVN Telecom mobile communication network. Diagnostic tests and examination of forecast accuracy measures indicate that the multiplicative seasonal ARIMA/GARCH model, i.e. ARIMA(1,0,1)×(0,1,1)₂₄/GARCH(1,1), provides a good fit when dealing with volatility clustering in the data series. The model is flexible enough to capture the characteristics of the EVN traffic series well and gives reasonable forecasting results. Moreover, in situations where the volatility need not be taken into account, e.g. short-term prediction, the multiplicative seasonal ARIMA/GARCH model still performs well with the GARCH parameters reduced to GARCH(0,0).
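
As a rough illustration of the two-stage Box-Jenkins/GARCH procedure described above, the sketch below fits a seasonal ARIMA mean model and then a GARCH(1,1) model on its residuals, using the statsmodels and arch Python packages; the traffic series here is a random placeholder for the EVN data.

# Two-stage fit sketched after the paper's approach: a seasonal ARIMA
# for the mean, then a GARCH(1,1) on its residuals for the volatility.
# 'traffic' is a placeholder hourly series (seasonal period 24).
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from arch import arch_model

traffic = pd.Series(np.random.rand(24 * 60))      # stand-in for EVN data

mean_model = SARIMAX(traffic, order=(1, 0, 1),
                     seasonal_order=(0, 1, 1, 24)).fit(disp=False)
resid = mean_model.resid

vol_model = arch_model(resid, vol="GARCH", p=1, q=1).fit(disp="off")
print(vol_model.summary())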

44 citations


Journal ArticleDOI
TL;DR: Three hybrid models are investigated to develop an accurate and efficient churn prediction model and the comparison with the other models shows that the three hybrid models outperformed single common models.
Abstract: The term "customer churn" is used in the information and communication technology (ICT) industry to describe customers who are about to leave for a competitor or end their subscription. Predicting this behavior is very important in a real-life market and competition, and managing it is essential. In this paper, three hybrid models are investigated to develop an accurate and efficient churn prediction model. The three models are based on two phases: a clustering phase and a prediction phase. In the first phase, customer data is filtered; in the second phase, customer behavior is predicted. The first model uses the k-means algorithm for data filtering and Multilayer Perceptron Artificial Neural Networks (MLP-ANN) for prediction. The second model uses hierarchical clustering with MLP-ANN. The third uses self-organizing maps (SOM) with MLP-ANN. The three models are developed on real data, and their accuracy and churn rate values are calculated and compared. The comparison shows that the three hybrid models outperform single common models.
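
A minimal sketch of the first hybrid model's cluster-then-predict structure is given below, assuming synthetic customer features and sklearn's KMeans and MLPClassifier; the cluster count, network size, and data are illustrative, not the paper's settings.

# Cluster-then-predict sketch of the paper's first hybrid model:
# k-means filters/segments customers, an MLP predicts churn per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 8))            # customer feature vectors
y = (X[:, 0] + rng.standard_normal(1000) > 1).astype(int)   # churn flag

clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)

for c in range(3):                            # one predictor per segment
    mask = clusters == c
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                        random_state=1).fit(X[mask], y[mask])
    print(f"cluster {c}: train accuracy {clf.score(X[mask], y[mask]):.2f}")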

43 citations


Journal ArticleDOI
TL;DR: A single-lead ECG compression method has been proposed based on improving the signal sparsity through the extraction of the signal's significant features, and results suggest that CS should be considered an acceptable methodology for ECG compression.
Abstract: Diagnosis of heart diseases can be done effectively from long-term recordings of ECG signals that preserve the signals' morphologies. In these cases, the volume of ECG data produced by the monitoring systems grows significantly. To make mobile healthcare possible, the need for efficient ECG signal compression algorithms to store and/or transmit the signal efficiently has been rising exponentially. Currently, the ECG signal is acquired at the Nyquist rate or higher, thus introducing redundancies between adjacent heartbeats due to its quasi-periodic structure. Existing compression methods remove these redundancies, achieving compression and facilitating transmission of the patient's imperative information. Based on the fact that these signals can be approximated by a linear combination of a few coefficients taken from different bases, an alternative compression scheme based on Compressive Sensing (CS) has been proposed. CS provides a new approach to signal compression and recovery by exploiting the fact that an ECG signal can be reconstructed from a relatively small number of samples acquired in the "sparse" domains, through well-developed optimization procedures. In this paper, a single-lead ECG compression method is proposed based on improving the signal sparsity through the extraction of the signal's significant features. The proposed method starts with a preprocessing stage that detects the peaks and periods of the Q, R and S waves of each beat. Then, the QRS-complex for each signal beat is estimated. The estimated QRS-complexes are subtracted from the original ECG signal and the resulting error signal is compressed using the CS technique. Throughout this process, DWT sparsifying dictionaries are adopted. The performance of the proposed algorithm, in terms of reconstructed signal quality and compression ratio, is evaluated by applying DWT spatial domain bases to ECG records extracted from the MIT-BIH Arrhythmia Database. The results indicate that an average compression ratio of 11:1 with PRD1 = 1.2% is obtained. Moreover, the quality of the retrieved signal is guaranteed and the compression ratio achieved is an improvement over those obtained by previously reported algorithms. Simulation results suggest that CS should be considered an acceptable methodology for ECG compression.
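
The front end of the proposed pipeline might look like the following sketch: locate R peaks, build an average QRS template, subtract it around each peak, and keep the sparser residual for CS compression. The peak-detection parameters and window width (half_win) are assumptions for illustration.

# Sketch of the pipeline's front end: locate R peaks, estimate an
# average QRS template, subtract it around each peak, and return the
# (sparser) error signal that would then be CS-compressed.
import numpy as np
from scipy.signal import find_peaks

def qrs_error_signal(ecg, fs, half_win=40):
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                          height=np.percentile(ecg, 95))
    beats = [ecg[p - half_win:p + half_win] for p in peaks
             if half_win <= p < len(ecg) - half_win]
    template = np.mean(beats, axis=0)          # estimated QRS complex
    err = ecg.copy()
    for p in peaks:
        if half_win <= p < len(ecg) - half_win:
            err[p - half_win:p + half_win] -= template
    return err                                  # residual to compress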

41 citations


Journal ArticleDOI
TL;DR: The status of the digital divide among Jordanian Telecentres is investigated by analyzing the impact of perceived usefulness, perceived ease-of-use and facilitating conditions on the behavioral intention of employees, and the effect of these three factors on user acceptance.
Abstract: This study investigates the status of the digital divide among Jordanian Telecentres. The objective is to analyze the impact of perceived usefulness, perceived ease-of-use and facilitating conditions on the behavioral intention of employees. In addition, the study investigates the effect of the three mentioned factors on user acceptance, with behavioral intention used as a mediating factor. The proposed research model was validated by distributing 150 survey questionnaires to the Telecentres in Jordan. The structural equation modeling (SEM) technique was used to analyze the results. One of the main limitations of this study is that the results cannot be generalized, since the study was limited to Jordan. Similar studies need to be conducted in different countries to either support or refute our results.

36 citations


Journal ArticleDOI
TL;DR: This paper explores the benefits, types and security issues of the virtualization hypervisor in a virtualized hardware environment.
Abstract: The concept of virtual machines is not new, but virtualization is expanding rapidly and gaining popularity in the IT world. Hypervisors are also popular in security as a means of isolation. The virtualization of information technology infrastructure enables IT resources to be shared and used across several devices and applications, supporting the growth of business needs. The environment created by virtualization is not restricted to any particular physical configuration or execution environment; the resources of a computer are shared logically. A hypervisor is software that interacts with the physical system to virtualize hardware, providing a virtualized hardware environment that supports multiple operating systems running simultaneously on one physical server. This paper explores the benefits, types and security issues of the virtualization hypervisor in a virtualized hardware environment.

34 citations


Journal ArticleDOI
TL;DR: The issue of spam detection is investigated with the aim of developing an efficient method to identify spam email based on analyzing the content of email messages, and a feature set that includes a considerable number of malicious-related features is identified.
Abstract: Spam is no longer just commercial unsolicited email messages that waste our time; it consumes network traffic and mail servers' storage. Furthermore, spam has become a major component of several attack vectors, including phishing, cross-site scripting, cross-site request forgery and malware infection. Statistics show that the amount of spam containing malicious content has increased compared to that advertising legitimate products and services. In this paper, the issue of spam detection is investigated with the aim of developing an efficient method to identify spam email based on the analysis of the content of email messages. We identify a feature set that includes a considerable number of malicious-related features. Our goal is to study the effect of these features in helping classical classifiers identify spam emails. To make the problem more challenging, we developed spam classification models based on imbalanced data where spam emails form the rare class, with only 16.5% of the total emails. Different metrics were utilized in the evaluation of the developed models. Results show a noticeable improvement of spam classification models when trained on a dataset that includes malicious-related features.
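
A minimal sketch of training a classifier on such imbalanced data (spam at roughly 16.5%) is shown below, using class weighting and per-class metrics; the random feature matrix stands in for the content and malicious-related features.

# Sketch of training on imbalanced data: class weighting plus metrics
# that are informative for the rare class.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.standard_normal((2000, 20))
y = (rng.random(2000) < 0.165).astype(int)     # rare spam class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=2)
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))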

32 citations


Journal ArticleDOI
TL;DR: This research paper investigates the current and existing security issues associated with VANETs and exposes gaps among existing approaches in order to highlight possible problem domains in this field.
Abstract: There is a significant increase in the rate of vehicle accidents in countries around the world, and also in the casualties involved every year. New technologies relating to the Vehicular Ad Hoc Network (VANET) have been explored due to the increase in vehicular traffic and congestion around us. Vehicular communication is very important as technology has evolved. The research and development of proposed VANET systems and their implementation would increase safety among road users, improve comfort for the corresponding passengers, drivers and other road users, and achieve a great improvement in traffic efficiency. This research paper investigates the current and existing security issues associated with VANETs and exposes gaps among existing approaches in order to highlight possible problem domains in this field.

30 citations


Journal ArticleDOI
TL;DR: This article highlights the different fault tolerance mechanisms in distributed systems used to prevent multiple system failures at multiple failure points, considering replication, high redundancy and high availability of the distributed services.
Abstract: The use of technology has increased vastly, and today computer systems are interconnected via different communication media. The use of distributed systems in our day-to-day activities has grown alongside data distribution. This is because distributed systems enable nodes to organize and share their resources among connected systems or devices, giving people access to geographically distributed computing facilities. Distributed systems may, however, suffer a loss of service availability due to system failures at multiple failure points. This article highlights the different fault tolerance mechanisms in distributed systems used to prevent such failures, considering replication, high redundancy and high availability of the distributed services.

29 citations


Journal ArticleDOI
TL;DR: This paper provides a clear guide for e-commerce companies sitting on huge volumes of data to easily manipulate the data for business improvement, which in return will make them highly competitive among their competitors.
Abstract: The huge volume of structured and unstructured data known as big data nowadays provides opportunities for companies, especially those that use electronic commerce (e-commerce). The data are collected from customers' internal processes, vendors, markets and the business environment. This paper presents a data mining (DM) process for e-commerce, including the three common algorithms: association, clustering and prediction. It also highlights some of the benefits of DM to e-commerce companies in terms of merchandise planning, sales forecasting, basket analysis, customer relationship management and market segmentation, which can be achieved with the three data mining algorithms. The main aim of this paper is to review the application of data mining in e-commerce by focusing on structured and unstructured data collected through various resources and cloud computing services, in order to justify the importance of data mining. Moreover, this study evaluates certain challenges of data mining, such as spider identification, data transformation and making data models comprehensible to business users. Other challenges, such as supporting slowly changing dimensions of data and making data transformation and model building accessible to business users, are also evaluated. A clear guide for e-commerce companies sitting on huge volumes of data to easily manipulate the data for business improvement, which in return will make them highly competitive among their competitors, is also provided in this paper.
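
As a toy illustration of the association algorithm mentioned above, the following sketch counts pairwise co-occurrences across transactions and reports item pairs above a support threshold; the baskets and threshold are invented for the example.

# Tiny basket-analysis sketch: count pairwise co-occurrence across
# transactions and report pairs over a support threshold.
from itertools import combinations
from collections import Counter

baskets = [{"milk", "bread"}, {"milk", "bread", "eggs"},
           {"bread", "eggs"}, {"milk", "eggs"}, {"milk", "bread"}]

pair_counts = Counter()
for basket in baskets:
    pair_counts.update(combinations(sorted(basket), 2))

min_support = 0.4                               # fraction of baskets
for pair, n in pair_counts.items():
    if n / len(baskets) >= min_support:
        print(pair, f"support={n / len(baskets):.2f}")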

20 citations


Journal ArticleDOI
TL;DR: This research compares the WEP and WPA mechanisms for a better understanding of their working principles and security bugs.
Abstract: Data security in wireless networks has posed a persistent threat to the core task of moving data from point A to point B. A variety of security issues have been raised in wired and wireless networks, and security specialists have proposed a variety of solutions. The security solutions proposed for wired networks could not be successfully implemented in wireless networks to identify, authenticate and authorize users, due to the infrastructure and working principles of wireless networks. Data on a wireless network are much more exposed to threats because the network is broadcast, unlike a wired network. Researchers have proposed WEP and WPA to provide security in wireless networks. This research compares the WEP and WPA mechanisms for a better understanding of their working principles and security bugs.

Journal ArticleDOI
TL;DR: In this article, the authors investigated perceived trust, surrogated by a number of hypothesized factors, and its effect on the choice of payment method, confirming the seven main hypotheses of the research related to testing whether certain factors were important in forming customers' perceived trust.
Abstract: This empirical study focused on investigating perceived trust, surrogated by a number of hypothesized factors, and its effect on the choice of payment method. The data were collected using a questionnaire as the primary data collection instrument, with a total of 214 responses collected from customers of MarkaVIP. The structural equation modeling technique was used to analyze the data in order to determine the strength of the relationship between the constituent factors of perceived trust and the method of payment. The main findings confirmed the seven main hypotheses of the research, which tested whether certain factors were important in forming customers' perceived trust. Four factors (reputation, security, familiarity and ease of use) were found to have a positive effect, while the remaining three (privacy, size and usefulness) were not. In addition, having perceived trust implied no preference for any particular method of payment by the customers.

Journal ArticleDOI
Arif Sari1
TL;DR: The deployment model of a two-tier hierarchical cluster topology architecture is introduced, and different jamming techniques proposed for WSNs are investigated by creating a specific classification of the different types of jamming attacks.
Abstract: 802.15.4 Wireless Sensor Networks (WSN) have become more economical, feasible and sustainable for the new generation communication environment; however, their resource constraints, such as limited power capacity, make it difficult for them to detect and defend themselves against a variety of attacks. Radio interference attacks generated against WSNs at the physical layer cannot be defeated through the conventional security mechanisms proposed for the 802.15.4 standards. The first section introduces the deployment model of a two-tier hierarchical cluster topology architecture and investigates different jamming techniques proposed for WSNs by creating a specific classification of the different types of jamming attacks. The following sections expose the mitigation techniques and possible built-in mechanisms to mitigate link layer jamming attacks on the proposed two-tier hierarchical clustered WSN topology. The two-tier hierarchical cluster-based topology is investigated based on a contention-based protocol suite through OPNET simulation scenarios.

Journal ArticleDOI
Arif Sari1
TL;DR: The TORA protocol is modified with a new RTS/CTS mechanism and simulated for comparison with the proposed lightweight robust-forwarding GRP scheme, in terms of performance metrics such as network throughput, end-to-end delay and message flooding rate, through the OPNET simulation package.
Abstract: The Gossip-Based Relay Protocol (GRP) was developed on the basis of the Ad Hoc On-Demand Distance Vector protocol (AODV) and proposed to increase the efficiency of packet routing in ad hoc networks through a specific flooding scheme. This lightweight protocol reduces collisions on the network through specific mechanisms. The Request to Send/Clear to Send (RTS/CTS) mechanism is widely used in ad hoc environments with the Temporally-Ordered Routing Algorithm (TORA) in order to eliminate collisions and control access to the shared medium through proposed authentication methods. Since GRP contains a specific directed acyclic graph (DAG) mechanism to mitigate the overhead problem, an RTS/CTS-modified TORA might yield similar performance metrics. In this paper, TORA is modified with a new RTS/CTS mechanism and simulated in order to be compared with the proposed lightweight robust-forwarding GRP scheme, in terms of specific performance metrics such as network throughput, end-to-end delay and message flooding rate, through the OPNET simulation package, in order to expose the optimal solution for increasing overall network throughput in an ad hoc environment.

Journal ArticleDOI
TL;DR: Assessment results show that the output timing of terminals should be adjusted to the terminal which has the latest output timing to maintain the fairness when the difference in network delay between the terminals is large and the comprehensive quality at each terminal can be maintained as high as possible.
Abstract: In this paper, we investigate the influences of network delay on QoE (Quality of Experience) such as the operability of haptic interface device and the fairness between players for soft objects in a networked real-time game subjectively and objectively. We handle a networked balloon bursting game in which two players burst balloons (i.e., soft objects) in a 3D virtual space by using haptic interface devices, and the players compete for the number of burst balloons. As a result, we find that the operability depends on the network delay from the local terminal to the other terminal, and the fairness is mainly dependent on the difference in network delay between the players’ terminals. We confirm that there exists a trade-off relationship between the operability and the fairness. We also see that the contribution of the fairness is larger than that of the operability to the comprehensive quality (i.e., the weighted sum of the operability and fairness). Assessment results further show that the output timing of terminals should be adjusted to the terminal which has the latest output timing to maintain the fairness when the difference in network delay between the terminals is large. In this way, the comprehensive quality at each terminal can be maintained as high as possible.

Journal ArticleDOI
TL;DR: A hybrid method for the compression of solar radiation data using predictive analysis is presented, which could improve the accuracy of analysis concerning climate studies and help in congestion control.
Abstract: The prediction of solar radiation is important for several applications in renewable energy research. A number of geographical variables affect solar radiation prediction, so identifying these variables for accurate solar radiation prediction is very important. This paper presents a hybrid method for the compression of solar radiation data using predictive analysis. The prediction of minute-wise solar radiation is performed using different models of Artificial Neural Networks (ANN), namely the multi-layer perceptron neural network (MLPNN), cascade feed-forward back propagation (CFNN) and Elman back propagation (ELMNN). Root mean square error (RMSE) is used to evaluate the prediction accuracy of the three ANN models. The information and knowledge gained from the present study could improve the accuracy of analysis concerning climate studies and help in congestion control.
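
A sketch of one of the three ANN models, an MLP regressor evaluated by RMSE as in the paper, is shown below; the minute-wise predictor variables are synthetic stand-ins for the geographical inputs.

# Sketch of an MLP prediction model evaluated by RMSE.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.random((1440, 5))                 # minute-wise predictor variables
y = X @ [0.5, 1.0, -0.3, 0.2, 0.8] + 0.05 * rng.standard_normal(1440)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=3).fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"RMSE: {rmse:.4f}")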

Journal ArticleDOI
TL;DR: The findings of this study expose the impact of e-government practices and the differences between them in terms of applicability, and provide a specific point of view for m-government adoption policy.
Abstract: Information and Communication Technologies (ICT) have offered m-government applications as an intermediate technology for providing effective and efficient government services to the public. Due to the high rate of corruption in developing states, government policies have diversified governmental services from offline to virtualized form to promote accessibility, transparency and accountability through mobile government. Deployment of such ICT tools also offers a unique opportunity to recover public confidence in government, which has been damaged by corruption. Virtualization of government services became compulsory due to the high rate of corruption in the economic context, which became a serious obstacle to the economic development of developing states. The virtualized services aim to harmonize governmental services on a mobile platform in order to become more transparent to the public. This research paper comparatively investigates the mobile government services of Malta and Singapore, which are classified as developing countries. The comparison is based on the demographic structure of each country, m-government policies and the country's ICT infrastructure. The findings of this study expose the impact of e-government practices and the differences between them in terms of applicability, and provide a specific point of view for m-government adoption policy.

Journal ArticleDOI
TL;DR: This paper proposes a new technique to localize mobile sensor nodes using sectorized antennas, considers both sensor nodes and seeds to be mobile, and argues that mobility can be exploited to improve the accuracy and precision of localization.
Abstract: Recently, there has been much focus on mobile sensor networks, and we have even seen the development of small-profile sensing devices that are able to control their own movement. Although it has been shown that mobility alleviates several issues relating to sensor network coverage and connectivity, many challenges remain. Among these, the need for position estimation is perhaps the most important. It is too expensive to include a GPS receiver with every sensor node. Hence, localization schemes for sensor networks typically use a small number of seed nodes that know their location, and protocols whereby other sensor nodes estimate their location from the messages they receive. In this paper, we propose a new technique to localize mobile sensor nodes using sectorized antennas. We consider both sensor nodes and seeds to be mobile, and argue that mobility can be exploited to improve the accuracy and precision of localization. The technique is tested extensively in a simulation environment and compared with other existing methods. The results of our experiments clearly indicate that our proposed approach can achieve high accuracy without needing a high density of seeds.
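
One way to picture sector-based localization is the sketch below: each seed beacon constrains the unknown node to an angular wedge within radio range, and candidate points satisfying every constraint are averaged. The seed positions, sector width, and range are assumed values, not the paper's protocol.

# Sketch of sector-constrained localization: keep candidate points
# satisfying every seed's sector constraint, then average them.
import numpy as np

rng = np.random.default_rng(4)
seeds = [((0.0, 0.0), 0.0), ((10.0, 0.0), np.pi / 2),   # (position, sector
         ((0.0, 10.0), -np.pi / 4)]                     #  center bearing)
WIDTH, RANGE = np.pi / 3, 15.0                          # sector geometry

def in_sector(p, seed_pos, bearing):
    v = np.asarray(p) - np.asarray(seed_pos)
    ang = np.arctan2(v[1], v[0])
    diff = np.arctan2(np.sin(ang - bearing), np.cos(ang - bearing))
    return np.linalg.norm(v) <= RANGE and abs(diff) <= WIDTH / 2

candidates = rng.uniform(0, 10, size=(20000, 2))
ok = [p for p in candidates
      if all(in_sector(p, s, b) for s, b in seeds)]
print("estimate:", np.mean(ok, axis=0) if ok else "no feasible point")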

Journal ArticleDOI
TL;DR: This research exposes the mechanisms and measures of data security in wireless networks from the point of view of reactive security approaches, and describes the reactive approaches used to enhance data security.
Abstract: There have been various security measures that deal with data security in wired and wireless networks, helping to ensure that data travelling from one point to another remain intact by identifying, authenticating and authorizing the right users and also encrypting the data over the network. Data communication between computers has brought countless benefits to users, but the same information technologies have created a gap, a vulnerable space in the communication medium, where the data being exchanged or transferred are exposed to threats. Data on wireless networks are especially exposed, since the network is broadcast, unlike a wired network. Data security in the past dealt with integrity, confidentiality and ensuring authorized usage of the data and the system. Less or no focus was placed on reactive approaches or measures to data security capable of responding properly to mitigate an attacker, avoid harm and prevent future attacks. This research exposes the mechanisms and measures of data security in wireless networks from the point of view of reactive security approaches, and describes the reactive approaches used to enhance data security.

Journal ArticleDOI
TL;DR: This paper presents a design methodology to provide network connectivity from a landline node in a rural region at very low cost, starting with a deep analysis of the region in order to identify relevant constraints and useful applications to sustain local activities and communication.
Abstract: The Wireless Mesh Network is presented as an appealing solution for bridging the digital divide between developed and under-developed regions. But the planning and deployment of these networks are not just a technical matter, since success depends on many other factors tied to the region concerned. Although we observe some deployments, to ensure usefulness and sustainability there is still a need for a concrete design process model and a proper network planning approach for rural regions, especially in Sub-Saharan Africa. This paper presents a design methodology to provide network connectivity from a landline node in a rural region at very low cost. We propose a methodology composed of ten steps, starting with a deep analysis of the region in order to identify relevant constraints and useful applications to sustain local activities and communication. The approach to planning the physical architecture of the network is based on an indoor-outdoor deployment that reduces the overall cost of the network.

Journal ArticleDOI
TL;DR: A connectivity-dependent data propagation scheme, in which each terminal transfers data adaptively by wireless multi-hop data transfer or store-and-forward data transfer depending on whether the terminal has connections to its neighboring terminals, and two types of graphical user interface (GUI) for both normal and disaster situations are proposed.
Abstract: Dual-purpose systems for both normal and disaster situations are necessary for providing continuous services from normal situations through disaster situations. We have been developing dual-purpose systems based on the assurance network design principle, which makes them work stably in both normal and disaster situations. This paper proposes a connectivity-dependent data propagation scheme in which each terminal transfers data adaptively, by wireless multi-hop data transfer or store-and-forward data transfer, depending on whether the terminal has connections to its neighboring terminals. To verify the resilience against disconnection among neighboring terminals, we show field experimental results on data propagation time. We also propose a dual-purpose system with two types of graphical user interface (GUI), one for each situation. Whenever a terminal receives a special packet in a disaster situation, the GUI automatically switches from the normal-situation type to the disaster-situation type. We have unified these two types of GUI so that users can understand how to use them even when the GUI is automatically switched. To validate the feasibility of the dual-purpose system, we show experimental results on the dissemination of assessment information and the automatic switching of GUIs.
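
The connectivity-dependent rule can be sketched as follows: forward immediately while neighbors are reachable, otherwise store packets and flush them when links return. The Terminal class and its methods are hypothetical names for illustration.

# Sketch of connectivity-dependent propagation: multi-hop transfer
# while connected, store-and-forward otherwise.
from collections import deque

class Terminal:
    def __init__(self, name):
        self.name, self.buffer = name, deque()
        self.neighbors = []          # currently connected terminals

    def propagate(self, packet):
        if self.neighbors:           # multi-hop transfer
            for n in self.neighbors:
                n.receive(packet)
        else:                        # store-and-forward fallback
            self.buffer.append(packet)

    def on_reconnect(self):          # flush buffered data when links return
        while self.buffer and self.neighbors:
            self.propagate(self.buffer.popleft())

    def receive(self, packet):
        print(f"{self.name} got {packet}")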

Journal ArticleDOI
TL;DR: The relation between 12 input and 9 output parameters, collected from 54 companies in Turkey, is investigated in this research, indicating that the relationship between organizational performance and human resource management can be modelled nonlinearly.
Abstract: The relation between HRM and firm performance has been analyzed statistically by many researchers in the literature. However, there are very few nonlinear approaches in the literature for finding the relation between Human Resource Management (HRM) and firm performance. This paper exposes the relationship between human resource management and organizational performance through the use of a nonlinear modeling technique, Radial Basis Functions (RBF). The relation between 12 input and 9 output parameters, collected from 54 companies in Turkey, is investigated in this research, indicating that the relationship between organizational performance and human resource management can be modelled nonlinearly.
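
A minimal RBF-network regression sketch of the modeling approach is given below, with fixed Gaussian centers, an assumed width, and a least-squares readout; the 12-input/9-output shapes follow the paper, but the data are synthetic.

# Minimal RBF-network regression: Gaussian features over fixed centers,
# then a linear least-squares readout mapping 12 inputs to 9 outputs.
import numpy as np

rng = np.random.default_rng(5)
X = rng.random((54, 12))                   # 54 firms, 12 HRM inputs
Y = rng.random((54, 9))                    # 9 performance outputs

centers = X[rng.choice(54, 10, replace=False)]      # 10 RBF centers
sigma = 1.0                                          # assumed width

def design(X):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

W, *_ = np.linalg.lstsq(design(X), Y, rcond=None)    # linear readout
print("fit residual:", np.linalg.norm(design(X) @ W - Y))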

Journal ArticleDOI
TL;DR: This paper is the first study that addresses performance evaluation of the MACAW protocol under a constant jamming attack; the performance of the MACAW protocol is simulated through OPNET Modeler 14.5 software.
Abstract: A jamming attack is quite a serious threat for mobile networks, as it can collapse all necessary communication infrastructure. Since mobile nodes in Mobile Ad Hoc Networks (MANET) communicate in a multi-hop mode, there is always a possibility for an intruder to launch a jamming attack in order to intercept communication among nodes. In this study, a network simulation has been carried out in order to explore and evaluate the possible impacts of a jamming attack on the MACAW protocol. Ad hoc network modelling is used to provide the communication infrastructure among mobile nodes in the simulation scenarios. In the first scenario, the nodes use the AODV routing protocol, which is designed for MANETs, while the second scenario contains simulated MACAW node models for comparison. This paper is the first study that addresses performance evaluation of the MACAW protocol under a constant jamming attack. The performance of the MACAW protocol is simulated through OPNET Modeler 14.5 software.

Journal ArticleDOI
TL;DR: This study examined several different routing protocols and evaluated the performance of three: the Ad Hoc On-Demand Distance Vector protocol (AODV), Destination-Sequenced Distance-Vector routing (DSDV), and Dynamic Source Routing (DSR).
Abstract: Mobile ad hoc networks use many different routing protocols to route data packets among nodes. Various routing protocols have been developed, and their usage depends on the application and network architecture. This study examined several different routing protocols, and evaluated the performance of three: the Ad Hoc On-Demand Distance Vector Protocol (AODV), the Destination-Sequenced Distance-Vector Routing (DSDV), and the Dynamic Source Routing (DSR). These three protocols were evaluated on a network with nodes ranging from 50 to 300, using performance metrics such as average delay, jitter, normal overhead, packet delivery ratio, and throughput. These performance metrics were measured by changing various parameters of the network: queue length, speed, and the number of source nodes. AODV performed well in high mobility and high density scenarios, whereas DSDV performed well when mobility and the node density were low. DSR performed well in low-mobility scenarios. All the simulations were performed in NS2 simulator.

Journal ArticleDOI
TL;DR: Simulations demonstrate that EFRED achieves a more stable throughput and performs better than current active queue management algorithms, decreasing the packet loss percentage and achieving the lowest queuing delay, end-to-end delay and delay variation (jitter).
Abstract: Quality of Service (QoS) generally refers to measurable quantities like latency and throughput, things that directly affect the user experience. Queuing (the most popular QoS tool) involves choosing the packets to be sent based on something other than arrival time. Active queue management is an important means of managing this queue to increase the effectiveness of Transmission Control Protocol networks. Active queue management (AQM) is an effective means to enhance congestion control and to achieve a trade-off between link utilization and delay. The de facto standard, Random Early Detection (RED), and many of its variants employ queue length as a congestion indicator to trigger packet dropping. One of these enhancements of RED is FRED, or Fair Random Early Detection, which attempts to deal with a fundamental aspect of RED in that it imposes the same loss rate on all flows, regardless of their bandwidths. FRED also uses per-flow active accounting and tracks the state of active flows. FRED protects fragile flows by deterministically accepting flows from low-bandwidth connections, and fixes several shortcomings of RED by computing queue length during both arrival and departure of the packet. Unlike FRED, we propose a new scheme that uses a hazard-rate-estimated packet dropping function in FRED. We call this new scheme Enhanced Fair Random Early Detection (EFRED). The key idea is that the EFRED scheme changes the packet dropping function, to achieve packet dropping lower than RED and other AQM algorithms like ARED and REM. Simulations demonstrate that EFRED achieves a more stable throughput and performs better than current active queue management algorithms, decreasing the packet loss percentage and achieving the lowest queuing delay, end-to-end delay and delay variation (jitter).
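
For reference, the classic RED dropping function that EFRED modifies ramps the drop probability linearly between two queue thresholds, as in the sketch below; EFRED, as described, replaces this ramp with a hazard-rate-estimated function. The threshold values are illustrative.

# RED-style dropping sketch: drop probability grows linearly between
# the min and max average-queue-length thresholds.
def red_drop_probability(avg_qlen, min_th=5, max_th=15, max_p=0.1):
    if avg_qlen < min_th:
        return 0.0                      # no congestion: accept packet
    if avg_qlen >= max_th:
        return 1.0                      # severe congestion: drop all
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

for q in (3, 8, 12, 20):
    print(q, red_drop_probability(q))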

Journal ArticleDOI
TL;DR: Through intensive simulations it is demonstrated that Adaptive TCP outperforms other well-established TCP variants, yielding more than 100% improvement in throughput and more than 60% improvement in bandwidth utilization compared to TCP NewReno.
Abstract: This paper presents a sender-side-only TCP mechanism to prevent compromised bandwidth utilization in IEEE 802.11 wireless networks. In the absence of a mechanism for accurate and immediate loss discrimination, the TCP sender unnecessarily reduces its loss window in response to packet losses due to transmission errors. At the same time, frequent transmission losses and the associated link retransmissions cause inaccuracy in the available bandwidth estimate. The proposal, Adaptive TCP, tackles these issues with two refinements. First, the sender estimates the degree of congestion by exploiting the statistics of the estimated Round Trip Time (RTT); with this, it prevents unnecessary shrinkage of the loss window and bandwidth estimate. Second, by observing the uninterrupted growth of its sending rate in the recent past, Adaptive TCP advances the bandwidth estimate under favorable network conditions. This in turn facilitates quick growth of TCP's sending rate after loss recovery and consequently improves bandwidth utilization. The authors implement the algorithm on top of TCP NewReno and evaluate and compare its performance with the wireless TCP variants deployed in the current Internet. Through intensive simulations it is demonstrated that Adaptive TCP outperforms other well-established TCP variants, yielding more than 100% improvement in throughput and more than 60% improvement in bandwidth utilization compared to TCP NewReno. The simulation results also demonstrate the compatibility of Adaptive TCP in a shared wireless environment.
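
The RTT-statistics idea can be sketched as below: normalize the smoothed RTT between its observed floor and ceiling to estimate the degree of congestion, then shrink the window only in proportion to it. The update rule and constants are illustrative, not the paper's exact algorithm.

# Sketch of RTT-based congestion estimation and proportional backoff.
def congestion_degree(srtt, rtt_min, rtt_max):
    if rtt_max <= rtt_min:
        return 0.0
    return min(1.0, max(0.0, (srtt - rtt_min) / (rtt_max - rtt_min)))

def adjusted_cwnd(cwnd, srtt, rtt_min, rtt_max):
    d = congestion_degree(srtt, rtt_min, rtt_max)
    # d ~ 0: loss was likely a transmission error, keep the window;
    # d ~ 1: genuine congestion, halve it as standard TCP would.
    return cwnd * (1.0 - 0.5 * d)

print(adjusted_cwnd(40.0, srtt=120.0, rtt_min=100.0, rtt_max=200.0))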

Journal ArticleDOI
TL;DR: This research exposes the impact of the determinants that influence mobile commerce application users' attitudes, by classifying and investigating the internal and external variables in a case study of the Cyprus Research Centre.
Abstract: Mobile Commerce (m-commerce) has become a very powerful tool in competitive business markets. Companies have started to use this technology to attract their customers and catch their attention. Usage of mobile commerce applications has spread across different countries and become very popular. Different communication protocols and security techniques have been designed for business use of m-commerce. Mobile commerce, like e-commerce, has brought significant change to the market. People use this technology with the freedom of having transactions anywhere and at any time. However, consumers face many difficulties while using this technology, whether consumer-based or service-provider-based. This research exposes the impact of the determinants that influence mobile commerce application users' attitudes, by classifying and investigating the internal and external variables in a case study of the Cyprus Research Centre.

Journal ArticleDOI
TL;DR: A new local-gateway-assisted handover key derivation schema is proposed that can meet the fast derivation and good forward/backward key secrecy requirements of handover key derivation in an enterprise femtocell network.
Abstract: With the dense deployment of femtocells in an enterprise femtocell network and the small coverage of femtocells, handover in such networks will be frequent. The general handover key derivation method used in LTE handover procedures is not suitable for handover in this scenario because of its long time cost and weak security. To solve this problem, this paper proposes a new local-gateway-assisted handover key derivation schema for enterprise femtocell networks. It can meet the fast derivation and good forward/backward key secrecy requirements of handover key derivation in an enterprise femtocell network. The simulation results verify that the proposed handover key derivation schema works better than the existing method.
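
As a rough sketch of local-gateway-assisted key derivation, the following uses HKDF over HMAC-SHA256 from Python's standard library; the salt, label, and inputs are assumptions for illustration, and the paper's exact schema may differ.

# HKDF (extract-then-expand) over HMAC-SHA256: derive a fresh handover
# key from an anchor key held by the local gateway and the target cell.
import hmac, hashlib

def hkdf(key, info, length=32):
    prk = hmac.new(b"handover-salt", key, hashlib.sha256).digest()
    okm, t, i = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
        i += 1
    return okm[:length]

anchor_key = bytes(32)                      # key held by the local gateway
target_cell_id = b"femto-042"               # hypothetical identifier
new_key = hkdf(anchor_key, b"handover|" + target_cell_id)
print(new_key.hex())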

Journal ArticleDOI
TL;DR: Techniques for low-power operation are shown in this paper, which use the lowest possible supply voltage coupled with architectural, logic style, circuit, and technology optimizations.
Abstract: Power consumption is the bottleneck of system performance. Power reduction has become an important issue in digital circuit design, especially for high-performance portable devices (such as cell phones, PDAs, etc.). Many power reduction techniques have been proposed, from the system level down to the circuit level. High-speed computation has thus become the expected norm for the average user, instead of being the province of the few with access to a powerful mainframe. Power is at a premium in the portable unit, and even in non-portable applications where power is available, the issue of low-power design is becoming critical. Thus, it is evident that methodologies for the design of high-throughput, low-power digital systems are needed. Techniques for low-power operation are shown in this paper, which use the lowest possible supply voltage coupled with architectural, logic style, circuit, and technology optimizations. The threshold voltages of the MTCMOS devices for both low and high Vth are constructed such that the low threshold Vth is approximately 150 - 200 mV, whereas the high threshold Vth is managed by varying the oxide thickness Tox. Hence different threshold voltages are used with minimum supply voltages, which makes this an ultra-low-power design.
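
The emphasis on the lowest possible supply voltage follows from the standard dynamic CMOS power equation P = a·C·V²·f: switching power falls quadratically with supply voltage. The worked example below uses assumed capacitance and frequency values.

# Worked example: dynamic CMOS power P = activity * C * V^2 * f,
# so halving the supply voltage cuts switching power to a quarter.
def dynamic_power(activity, cap_farads, vdd, freq_hz):
    return activity * cap_farads * vdd ** 2 * freq_hz

p_full = dynamic_power(0.1, 1e-12, 1.0, 1e9)    # 1.0 V supply
p_low = dynamic_power(0.1, 1e-12, 0.5, 1e9)     # 0.5 V supply
print(p_full, p_low, p_low / p_full)             # ratio = 0.25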

Journal ArticleDOI
TL;DR: The basic issues and key elements of IPv6 translation transition mechanisms are investigated, a first applicability index system is presented, and the applicability of existing translation techniques is analyzed based on the presented index system.
Abstract: Due to the exhaustion of IPv4 address resources, the transition from IPv4 to IPv6 is inevitable and fairly urgent. Numerous transition mechanisms have been proposed to solve the challenging issues of IPv6 transition. The requirement to interconnect IPv4 and IPv6 networks or hosts persists throughout the IPv6 transition process, and a one-time translation scheme is indispensable to achieving this interconnection. In addition, double translation can be used in the IPv4-IPv6-IPv4 scenario. As a long-term strategy, the translation scheme is important and inevitable. However, because of the diverse characteristics and transition requirements of practical networks and the lack of applicability analysis, the selection and deployment of transition mechanisms face grand challenges. Targeting those challenges, this paper investigates the basic issues and key elements of IPv6 translation transition mechanisms and presents their first applicability index system. In particular, we analyze the applicability of existing translation techniques based on the presented index system, which provides significant guidance for the practical deployment of IPv6 transition techniques.