
Showing papers in "Telecommunication Systems in 2011"


Journal ArticleDOI
TL;DR: This paper presents a bio-inspired trust and reputation model, called BTRM-WSN, based on ant colony systems and aiming at providing trust and reputation in WSNs, and demonstrates the accuracy, robustness and lightness of the proposed model in a wide set of situations.
Abstract: Wireless Sensor Networks (WSNs) are becoming more and more widespread, and both industry and academia are focusing their research efforts on improving their applications. One of the first issues to solve in order to achieve that expected improvement is to assure a minimum level of security in such a restrictive environment. Moreover, ensuring confidence between every pair of interacting nodes is a critical issue in this kind of network. Under these conditions we present in this paper a bio-inspired trust and reputation model, called BTRM-WSN, based on ant colony systems and aiming at providing trust and reputation in WSNs. Experiments and results demonstrate the accuracy, robustness and lightness of the proposed model in a wide set of situations.

159 citations


Journal ArticleDOI
TL;DR: A face liveness detection system against spoofing with photographs, videos, and 3D models of a valid user in a face recognition system that does not need user collaboration and runs in a non-intrusive manner.
Abstract: This paper presents a face liveness detection system against spoofing with photographs, videos, and 3D models of a valid user in a face recognition system. Anti-spoofing clues inside and outside a face are both exploited in our system. The inside-face clues of spontaneous eyeblinks are employed for anti-spoofing of photographs and 3D models. The outside-face clues of scene context are used for anti-spoofing of video replays. The system does not need user collaboration, i.e., it runs in a non-intrusive manner. In our system, the eyeblink detection is formulated as an inference problem of an undirected conditional graphical framework which models contextual dependencies in blink image sequences. The scene context clue is found by comparing the difference of regions of interest between the reference scene image and the input one, which is based on the similarity computed by local binary pattern descriptors on a series of fiducial points extracted in scale space. Extensive experiments are carried out to show the effectiveness of our system.
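The scene-context comparison above rests on local binary pattern (LBP) descriptors. The following is a minimal sketch of plain 8-neighbour LBP codes and a chi-square histogram distance; the paper computes LBP at fiducial points in scale space, which is not reproduced here, so treat this purely as an illustration of the descriptor itself:

```python
import numpy as np

def lbp_image(img):
    """8-neighbour local binary pattern codes for the interior pixels
    of a grayscale image (minimal sketch, no interpolation)."""
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    # neighbours in clockwise order, each contributing one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((n >= c) << bit).astype(np.uint8)
    return code

def lbp_hist(img):
    # normalized 256-bin histogram of LBP codes
    h = np.bincount(lbp_image(img).ravel(), minlength=256).astype(float)
    return h / h.sum()

def chi2(h1, h2, eps=1e-10):
    # chi-square distance between two LBP histograms
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

A small distance indicates similar texture in the compared regions of interest, so a replayed video whose background differs from the reference scene yields a large distance.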

134 citations


Journal ArticleDOI
TL;DR: The experimental results show that the introduced hybrid scheme using GA obtains better performance than other reported wavelet thresholding algorithms, and that the quality of the denoised ECG signal is more suitable for clinical diagnosis.
Abstract: This paper introduces an effective hybrid scheme for the denoising of electrocardiogram (ECG) signals corrupted by non-stationary noises, using a genetic algorithm (GA) and the wavelet transform (WT). We first applied wavelet denoising to noise reduction of multi-channel high-resolution ECG signals. In particular, the influence of the selection of the wavelet function and the choice of decomposition level on the efficiency of the denoising process was considered. Selection of suitable wavelet denoising parameters is critical for the success of ECG signal filtration in the wavelet domain. Therefore, in our noise elimination method the genetic algorithm is used to select the optimal wavelet denoising parameters that maximize the filtration performance. The efficiency of our scheme is evaluated using the percentage root mean square difference (PRD) and the signal-to-noise ratio (SNR). The experimental results show that the introduced hybrid scheme using GA obtains better performance than other reported wavelet thresholding algorithms, and that the quality of the denoised ECG signal is more suitable for clinical diagnosis.
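The GA-driven parameter search can be illustrated with a toy version: a single-level Haar transform, soft thresholding of the detail band, and a tiny genetic loop that evolves the threshold to maximize SNR. Everything here is a hedged sketch, not the paper's method: the Haar wavelet, the single evolved parameter (the threshold), and the use of a clean reference signal inside the fitness function are illustrative assumptions.

```python
import numpy as np

def haar_dwt(x):
    # single-level Haar transform (assumes even length)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail band
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, thr):
    a, d = haar_dwt(x)
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft thresholding
    return haar_idwt(a, d)

def snr(clean, est):
    return 10 * np.log10(np.sum(clean**2) / np.sum((clean - est)**2))

def ga_select_threshold(noisy, clean, pop=20, gens=30, seed=0):
    """Tiny GA: keep the fitter half of the population, mutate it, repeat.
    Fitness uses the clean reference purely for illustration."""
    rng = np.random.default_rng(seed)
    thrs = rng.uniform(0.0, 2.0, pop)               # initial population
    for _ in range(gens):
        fit = np.array([snr(clean, denoise(noisy, t)) for t in thrs])
        top = thrs[np.argsort(fit)[-pop // 2:]]     # selection
        kids = top + rng.normal(0, 0.05, len(top))  # mutation
        thrs = np.clip(np.concatenate([top, kids]), 0.0, 2.0)
    fit = np.array([snr(clean, denoise(noisy, t)) for t in thrs])
    return thrs[np.argmax(fit)]
```

In real denoising the clean signal is unknown, so the fitness would instead rely on a noise estimate or a reference recording, as the PRD/SNR evaluation in the paper does.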

105 citations


Journal ArticleDOI
TL;DR: A number of statistically significant observations regarding the robustness of the systems are obtained from a study of the vulnerabilities of fingerprint-based recognition systems to direct attacks carried out with and without the cooperation of the user.
Abstract: The vulnerabilities of fingerprint-based recognition systems to direct attacks with and without the cooperation of the user are studied. Two different systems, one minutiae-based and one ridge-feature-based, are evaluated on a database of real and fake fingerprints. Based on the quality of the fingerprint images and on the results achieved in different operational scenarios, we obtain a number of statistically significant observations regarding the robustness of the systems.

74 citations


Journal ArticleDOI
TL;DR: An overview of GRASP is given, describing its basic components and enhancements to the basic procedure, including reactive GRASP and intensification strategies.
Abstract: GRASP (Greedy Randomized Adaptive Search Procedures) is a multistart metaheuristic for producing good-quality solutions of combinatorial optimization problems. Each GRASP iteration is usually made up of a construction phase, where a feasible solution is constructed, and a local search phase which starts at the constructed solution and applies iterative improvement until a locally optimal solution is found. While, in general, the construction phase of GRASP is a randomized greedy algorithm, other types of construction procedures have been proposed. Repeated applications of a construction procedure yield diverse starting solutions for the local search. This paper gives an overview of GRASP describing its basic components and enhancements to the basic procedure, including reactive GRASP and intensification strategies.
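The construction/local-search loop described above can be sketched on a toy 0/1 knapsack instance. The α-based restricted candidate list (RCL) and the swap-based local search are standard GRASP ingredients; the knapsack instance itself is an illustrative assumption, not a problem from the paper.

```python
import random

def grasp(items, capacity, iters=200, alpha=0.3, seed=1):
    """GRASP sketch for a 0/1 knapsack; items are (weight, value) pairs."""
    rng = random.Random(seed)
    best, best_val = set(), 0
    for _ in range(iters):
        sol = construct(items, capacity, alpha, rng)   # phase 1
        sol = local_search(sol, items, capacity)       # phase 2
        val = sum(items[i][1] for i in sol)
        if val > best_val:
            best, best_val = sol, val
    return best, best_val

def construct(items, capacity, alpha, rng):
    sol, weight = set(), 0
    while True:
        # candidates that still fit, ranked by value density
        cand = [i for i in range(len(items))
                if i not in sol and weight + items[i][0] <= capacity]
        if not cand:
            return sol
        dens = {i: items[i][1] / items[i][0] for i in cand}
        lo, hi = min(dens.values()), max(dens.values())
        # restricted candidate list: within alpha of the best density
        rcl = [i for i in cand if dens[i] >= hi - alpha * (hi - lo)]
        pick = rng.choice(rcl)
        sol.add(pick)
        weight += items[pick][0]

def local_search(sol, items, capacity):
    # first-improvement: swap one selected item for a better outside item
    improved = True
    while improved:
        improved = False
        weight = sum(items[i][0] for i in sol)
        for i in list(sol):
            for j in range(len(items)):
                if j in sol:
                    continue
                if (weight - items[i][0] + items[j][0] <= capacity
                        and items[j][1] > items[i][1]):
                    sol.remove(i); sol.add(j)
                    improved = True
                    break
            if improved:
                break
    return sol
```

Smaller α makes the construction greedier; α close to 1 makes it nearly random, which is the trade-off that reactive GRASP tunes adaptively.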

72 citations


Journal ArticleDOI
TL;DR: This work derives the Nash equilibria of the game for pure and mixed strategies, along with the expected payoffs and the price of anarchy corresponding to these equilibria, and formulates a clustering mechanism, called Clustered Routing for Selfish Sensors (CROSS), that can be applied to sensor networks in practice.
Abstract: Game theory has been used for decades in fields of science such as economics and biology, but recently it has been used to model routing and packet forwarding in wireless ad-hoc and sensor networks. However, the clustering problem, related to the self-organization of nodes into large groups, has not been studied under this framework. In this work our objective is to provide a game-theoretical model of clustering for ad-hoc and sensor networks. The analysis is based on a non-cooperative game approach in which each sensor behaves selfishly in order to conserve its energy and thus maximize its lifespan. We derive the Nash equilibria of the game for pure and mixed strategies, as well as the expected payoffs and the price of anarchy corresponding to these equilibria. Then, we use this analysis to formulate a clustering mechanism, which we call Clustered Routing for Selfish Sensors (CROSS), that can be applied to sensor networks in practice. Comparing CROSS to a popular clustering technique, we show via simulations that it achieves similar performance.
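For intuition, a clustering game of this flavour can be cast as a volunteer's dilemma: declaring oneself clusterhead costs c in energy, but any volunteer confers benefit b on everyone. This payoff structure and the symbols b and c are assumptions made for illustration, not the paper's actual model; under them the symmetric mixed-strategy equilibrium has a closed form.

```python
def volunteer_mixed_ne(b, c, n):
    """Symmetric mixed-strategy NE of an n-player volunteer's dilemma:
    each node declares itself clusterhead with probability p.
    Derived from indifference: b - c = b * (1 - q**(n-1)), q = 1 - p."""
    assert 0 < c < b and n >= 2
    q = (c / b) ** (1.0 / (n - 1))   # probability of NOT volunteering
    return 1.0 - q

def payoffs(b, c, n, p):
    """Expected payoff of volunteering vs. abstaining when the other
    n-1 players each volunteer with probability p."""
    q = 1.0 - p
    u_volunteer = b - c                    # benefit minus clusterhead cost
    u_abstain = b * (1.0 - q ** (n - 1))   # benefit iff someone else volunteers
    return u_volunteer, u_abstain
```

At the equilibrium probability both actions yield the same expected payoff, and the volunteering probability falls as the population grows, which is the free-riding effect the price of anarchy quantifies.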

71 citations


Journal ArticleDOI
TL;DR: The proposed DIR protocol supports mobile source and destination vehicles in urban VANETs and outperforms existing solutions in terms of packet delivery ratio, data packet delay, and throughput.
Abstract: In this paper, we present a diagonal-intersection-based routing (DIR) protocol for vehicular ad hoc networks. The DIR protocol constructs a series of diagonal intersections between the source and destination vehicles. DIR is a geographic routing protocol: the source vehicle geographically forwards data packets toward the first diagonal intersection, then the second, and so on until the last diagonal intersection, from which they finally reach the destination vehicle. For a given pair of neighboring diagonal intersections, two or more disjoint sub-paths exist between them. The novel property of the DIR protocol is its auto-adjustability: the sub-path with the lowest data packet delay between two neighboring diagonal intersections is dynamically selected to forward data packets, and the route is automatically re-routed onto that sub-path to reduce delay. The proposed DIR protocol supports mobile source and destination vehicles in urban VANETs. Experimental results show that the DIR protocol outperforms existing solutions in terms of packet delivery ratio, data packet delay, and throughput.

57 citations


Journal ArticleDOI
TL;DR: This paper studies some evolutionary games where competition between individuals from a large population occurs through many local interactions between randomly selected individuals and the effect of the time delays on the convergence of evolutionary dynamics to the ESS in an evolutionary game in which each pure strategy is associated with its own delay.
Abstract: We study in this paper some evolutionary games where competition between individuals from a large population occurs through many local interactions between randomly selected individuals. We focus on games that have the property of possessing a single interior evolutionarily stable strategy (ESS). We study in particular the effect of time delays on the convergence of evolutionary dynamics to the ESS in an evolutionary game in which each pure strategy is associated with its own delay. In particular, we study a multiple access game as well as a Hawk and Dove game. We study the properties of the ESS in these games and also the effect of time delays on the convergence of various bio-inspired evolutionary game dynamics to the ESS.

36 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider two resource allocation games, where the mobile stations selfishly choose their power allocation policies in order to maximize their individual uplink transmission rates; in particular they can ignore some specified centralized policies.
Abstract: We consider wireless networks that can be modeled by multiple access channels in which all the terminals are equipped with multiple antennas. The propagation model used to account for the effects of transmit and receive antenna correlations is the unitary-invariant-unitary model, which is one of the most general models available in the literature. In this context, we introduce and analyze two resource allocation games. In both games, the mobile stations selfishly choose their power allocation policies in order to maximize their individual uplink transmission rates; in particular they can ignore some specified centralized policies. In the first game considered, the base station implements successive interference cancellation (SIC) and each mobile station chooses its best space-time power allocation scheme; here, a coordination mechanism is used to indicate to the users the order in which the receiver applies SIC. In the second framework, the base station is assumed to implement single-user decoding. For these two games a thorough analysis of the Nash equilibrium is provided: the existence and uniqueness issues are addressed; the corresponding power allocation policies are determined by exploiting random matrix theory; the sum-rate efficiency of the equilibrium is studied analytically in the low and high signal-to-noise ratio regimes and by simulations in more typical scenarios. Simulations show that, in particular, the sum-rate efficiency is high for the type of systems investigated and the performance loss due to the use of the proposed suboptimum coordination mechanism is very small.

33 citations


Journal ArticleDOI
TL;DR: Measurements of home users at a broadband wireless access service provider are performed, showing daily traffic fluctuations, flow statistics as well as application distributions, and a shift from web and Peer-to-Peer file sharing traffic to streaming applications is observed.
Abstract: Traffic characterization is an important means for Internet Service Providers (ISPs) to adapt and optimize their networks to the requirements of their customers. Most network measurements are performed in the backbone of these ISPs, showing both residential and business Internet traffic. However, the traffic characteristics of business and home users differ significantly. Therefore, we have performed measurements of home users at a broadband wireless access service provider in order to reflect only home-user traffic characteristics. In this paper, we present the results of these measurements, showing daily traffic fluctuations, flow statistics as well as application distributions. The results show a difference from backbone traffic characteristics. Furthermore, we observed a shift from web and Peer-to-Peer (P2P) file sharing traffic to streaming applications.

29 citations


Journal ArticleDOI
TL;DR: This paper considers models and solving techniques for lexicographical optimization of two load balancing objective functions, and proposes a heuristic technique that can compute both feasible solutions and lower bounds for the addressed optimization problem.
Abstract: In telecommunication networks based on the current Ethernet technology, routing of traffic demands is based on multiple spanning trees: the network operator configures different routing spanning trees and assigns each demand to be routed in one of the selected spanning trees. A major optimization issue in this solution is the combined determination of (i) a set of appropriate spanning trees, and (ii) an assignment of demands to the trees, in order to achieve optimal load balancing on the links of the network. In this paper we consider models and solution techniques for lexicographical optimization of two load balancing objective functions. The first objective is the min-max optimization of the n worst link loads (with n up to the total number of network links), and the second objective is the minimization of the average link load (when n is smaller than the total number of network links). Besides exact methods, a heuristic technique that can compute both feasible solutions and lower bounds for the addressed optimization problem is proposed. Finally, we discuss the effectiveness of the different solutions using the results of a numerical study of realistic case studies.

Journal ArticleDOI
TL;DR: A palmprint based verification system which uses low-order Zernike moments of palmprint sub-images to verify user with the help of non-occluded regions and is robust to occlusion is proposed.
Abstract: This paper proposes a palmprint-based verification system which uses low-order Zernike moments of palmprint sub-images. Euclidean distance is used to match the Zernike moments of corresponding sub-images of the query and enrolled palmprints. The matching scores of the sub-images are fused using a weighted fusion strategy. The proposed system can also classify each sub-image of the palmprint as an occluded or non-occluded region and verify the user with the help of the non-occluded regions alone, so it is robust to occlusion. The palmprint is extracted from the acquired hand image using a low-cost flatbed scanner. A palmprint extraction procedure which is robust to hand translation and rotation on the scanner has been proposed. The system is tested on the IITK, PolyU and CASIA databases, containing 549, 5239 and 7752 hand images respectively. It achieves an accuracy of more than 98%, with FAR and FRR of less than 2%, on all the databases.
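The occlusion-aware weighted fusion of per-sub-image matching scores might look like the sketch below. The feature vectors, weights, and occlusion flags are all assumed inputs, and the Zernike-moment computation itself is omitted; this only illustrates the fusion and decision step.

```python
import numpy as np

def fuse_scores(query_feats, enrolled_feats, weights, occluded):
    """Weighted fusion of per-sub-image Euclidean distances, skipping
    sub-images flagged as occluded (illustrative sketch)."""
    num, den = 0.0, 0.0
    for q, e, w, occ in zip(query_feats, enrolled_feats, weights, occluded):
        if occ:
            continue  # verify using non-occluded regions only
        num += w * np.linalg.norm(np.asarray(q) - np.asarray(e))
        den += w
    if den == 0:
        raise ValueError("all sub-images occluded")
    return num / den  # weighted average distance

def verify(score, threshold):
    return score <= threshold  # smaller distance => same palm
```

Skipping occluded sub-images keeps a partial occlusion from inflating the fused distance, which is what makes the scheme robust to occlusion.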

Journal ArticleDOI
TL;DR: This work considers a priority classes of service (CoS) model and investigates how different buffer management strategies can be combined with drop and scheduling policies, to provide strict priority based services, or to provide custom allocation of network resources.
Abstract: Vehicular Delay-Tolerant Networking (VDTN) is a Delay-Tolerant Network (DTN) based architecture concept for transit networks, where vehicle movement and the bundle relaying service of vehicles are opportunistically exploited to enable non-real-time applications under environments prone to connectivity disruptions, network partitions and potentially long delays. In VDTNs, network resources may be limited, for instance due to physical constraints of the network nodes. In order to be able to prioritize application traffic according to its requirements in such constrained scenarios, traffic differentiation mechanisms must be introduced into the VDTN architecture. This work considers a priority classes of service (CoS) model and investigates how different buffer management strategies can be combined with drop and scheduling policies, to provide strict priority-based services, or to provide custom allocation of network resources. The efficiency and tradeoffs of these proposals are evaluated through extensive simulation.

Journal ArticleDOI
TL;DR: A new Global System for Mobile Communications (GSM) authentication protocol is proposed to remedy several drawbacks of the current GSM authentication protocol for roaming users; it not only remedies these drawbacks but also fits the needs of roaming users.
Abstract: In this paper, a new Global System for Mobile Communications (GSM) authentication protocol is proposed to improve on several drawbacks of the current GSM authentication protocol for roaming users, including: (a) communication overhead between VLRs; (b) huge bandwidth consumption between the VLR and HLR; (c) storage space overhead in the VLR; (d) overload of the HLR with authentication of mobile stations; and (e) lack of support for bilateral authentication. The main contribution of this paper is that the protocol not only remedies the drawbacks listed above but also fits the needs of roaming users. In addition, the proposed protocol does not change the existing architecture of GSM, and its robustness is the same as that of the original GSM, since it is based on the security algorithms A3, A5, and A8.

Journal ArticleDOI
TL;DR: This paper applies Benders decomposition to the network design problem and develops a two-phase solution method that uses a number of improvements over the basic Benders algorithm, and presents promising results on randomly generated test problems.
Abstract: In this paper we consider a telecommunications network design problem allowing for multiple technologies. The problem arises in wide-area network and metro-area network design for which a combination of technologies may be necessary due to high traffic volumes, long-distance transmission, and design restrictions. The network design problem builds the best network to channel traffic between a set of origins and destinations, which requires selecting links, equipping them with fiber, deciding on the type of technology, and locating switches. The goal is to minimize the total cost of the network, which accounts for the flow cost, the fiber and technology costs, and the switch-location cost. We model the problem using a multicommodity network design formulation with side constraints. We apply Benders decomposition to the problem and develop a two-phase solution method that uses a number of improvements over the basic Benders algorithm. We present promising results on randomly generated test problems.

Journal ArticleDOI
TL;DR: An analytical model is proposed which yields approximate values for mean queue length and mean packet delay in an EPON using IPACT with Gated Service (GS) scheme under the assumption of heterogeneous Poisson arrivals and is shown to behave more efficiently than other QoS sensitive DBAs in the literature.
Abstract: The Ethernet PON (EPON) is viewed by many as an attractive solution to deliver very high-speed broadband access and is widely deployed in some geographical areas. While downstream traffic is broadcast to all customers, the access of upstream traffic to the fiber has to be arbitrated in order to avoid collisions. This arbitration mechanism and, more generally, bandwidth distribution and QoS provisioning have been left to the implementer. One solution is to enforce static Time Division Multiple Access (TDMA) between end-users. This however precludes an efficient usage of resources. Interleaved Polling with Adaptive Cycle Time (IPACT) is one of the earliest proposed schemes for Dynamic Bandwidth Assignment (DBA) in EPON and has been extensively used as a benchmark by many subsequent allocation schemes. In this paper, we first propose an analytical model which yields approximate values for the mean queue length and mean packet delay in an EPON using IPACT with the Gated Service (GS) scheme under the assumption of heterogeneous Poisson arrivals. We use the model to demonstrate that all users experience performance degradation in case of local overload, thus showing the necessity of correcting IPACT-GS in order to avoid this phenomenon. This is achieved by designing a control plane for EPON, which includes a priority-based DBA together with a framework for enforcing Service Level Agreements (SLAs) and fairly sharing available resources. The proposed framework is easily configured (all the control being centralized at the OLT or in the backbone) while allowing the support of a large variety of services. It is shown to behave more efficiently than other QoS-sensitive DBAs in the literature.

Journal ArticleDOI
TL;DR: A design algorithm for networks with a restoration mechanism that provides failure-independent, end-to-end path protection to a set of given demands under a single link or node failure with a focus on optical networks is presented.
Abstract: This paper presents a design algorithm for networks with a restoration mechanism that provides failure-independent, end-to-end path protection to a set of given demands under a single link or node failure with a focus on optical networks. The restoration routes are provided on preconfigured cycles, where each of the demands is assigned a single restoration route and specific restoration wavelengths on a segment of one cycle (splitting is not allowed). The number of reserved restoration wavelengths may vary from one link to the next on a cycle; hence, we refer to these cycles as Preconfigured Virtual Cycles (PVCs). The network design algorithm consists of three major parts. The first part generates a large number of candidate PVCs. Our algorithm allows assignment of certain demands that have common failure scenarios to the same PVC. The second part selects a set of PVCs from among the candidates, attempting to minimize the total reserved restoration cost while ensuring that each demand is assigned to one PVC. This is achieved by solving a set covering problem followed by elimination of duplicate assignments. The third part resolves conflicts of wavelength assignments.

Journal ArticleDOI
Jose Simoes1, Thomas Magedanz1
TL;DR: This paper explains how the convergence of telecommunications and web services can be achieved, and demonstrates the potential of this architectural paradigm with a prototype service.
Abstract: Telecommunication and Internet services are constantly subject to change, seeking the customer's full satisfaction. Enriching these services with innovative approaches such as context-aware, social, mobile, adaptable and interactive mechanisms enables users to experience a variety of personalized services seamlessly across different platforms and technologies. In this sense, Service Oriented Architectures play a central role in allowing component reuse and low-cost service creation. Together with the IP Multimedia Subsystem, they enable the convergence of telecommunications and web services, allowing the network transport technologies to be abstracted from the services above. By integrating these technologies, a number of synergies can be explored. Existing services can be easily enriched with context information and made available on a variety of networks, and new services can be composed using previously existing building blocks. This paper explains how this integration can be achieved, and demonstrates the potential of this architectural paradigm with a prototype service.

Journal ArticleDOI
TL;DR: This paper proposes a new approach to be applied for preventing network congestion in AQM routers that includes a procedure for selecting the packet to be dropped that improves the fairness among different classes of flows and improves the VoIP traffic performance in terms of packet dropping probability, MOS, and intelligibility.
Abstract: The adoption of the IP protocol for serving diverse applications raises the need for mechanisms to prevent network congestion in scenarios with different traffic types (responsive and unresponsive) sharing limited network resources. To deal with this issue, a number of algorithms for active queue management (AQM) have been proposed. However, most of them do not observe the traffic type and thus disregard this knowledge. In this way, the provided service may not comply with the distinctive requirements of the different types of traffic, such as VoIP services, which demand bounded packet latency and loss rates. This paper proposes a new approach for preventing network congestion in AQM routers. Our scheme includes a procedure for selecting the packet to be dropped that improves the fairness among different classes of flows. We evaluate the use of this approach on distinct AQM schemes in scenarios with different degrees of UDP and TCP traffic mix. Objective and subjective performance measurements are reported. The experimental evaluation indicates that our approach improves the fairness among different traffic classes without using any packet scheduler. In fact, it also improves the VoIP traffic performance in terms of packet dropping probability, MOS (Mean Opinion Score) and intelligibility. We also show that our approach has no negative impact on packet delay. Moreover, this is not achieved at the expense of responsive TCP traffic.

Journal ArticleDOI
TL;DR: It is demonstrated how robust routing settings with guaranteed performance for all foreseen traffic variations can be effectively computed via memory efficient iterative techniques and polynomial-time algorithms.
Abstract: Routing configurations that have been optimized for a nominal traffic scenario often display significant performance degradation when they are subjected to real network traffic. These degradations are due to the inherent sensitivity of classical optimization techniques to changes in model parameters combined with the significant traffic variations caused by demand fluctuations, component failures and network reconfigurations. In this paper, we review important sources for traffic variations in data networks and describe tractable models for capturing the associated traffic uncertainty. We demonstrate how robust routing settings with guaranteed performance for all foreseen traffic variations can be effectively computed via memory efficient iterative techniques and polynomial-time algorithms. The techniques are illustrated on real data from operational IP networks.

Journal ArticleDOI
TL;DR: Using a mathematical model, the optimal sleep window satisfying the required quality of service (QoS) on call setup delay and talker arbitration delay is found, and it is shown that the sleep mode and idle mode provide a considerable reduction in the energy consumption of the mobile stations.
Abstract: We investigate how much power saving can be achieved for a Push-To-Talk (PTT) service by employing sleep mode and idle mode in IEEE 802.16e. In this paper, a mobile station employing sleep mode during an on-session and idle mode during an off-session of the PTT service is modeled as a semi-Markov chain. We obtain the power consumption of the mobile station, the call setup delay and the talker arbitration delay. Using our mathematical model, we can find the optimal sleep window satisfying the required quality of service (QoS) on call setup delay and talker arbitration delay. The numerical examples show that the sleep mode and idle mode provide a considerable reduction in the energy consumption of the mobile stations.

Journal ArticleDOI
TL;DR: By simultaneous implementation of the effective bandwidth algorithm in the data-link layer and adaptive modulation technique in the physical layer, the performance of the wireless network in terms of the number of rejected calls and system throughput improves.
Abstract: In this paper, we introduce an effective bandwidth-based call admission control (CAC) with adaptive modulation technique to manage the traffic in a wireless IP-based network. Furthermore, in order to efficiently use the physical resources of the network, we take advantage of an adaptive MQAM (M-ary quadrature amplitude modulation) to match transmission rates to the time-varying channel conditions. This cross-layer architecture has been examined for the self-similar traffic model. The results show that by simultaneous implementation of the effective bandwidth algorithm in the data-link layer and adaptive modulation technique in the physical layer, the performance of the wireless network in terms of the number of rejected calls and system throughput improves.

Journal ArticleDOI
TL;DR: The proposed general formulation on the interference management in OFDM wireless networks results in a joint transmit scheduling and dynamic sub-carrier and power allocation scheme, based on a fair and efficient framework while satisfying the delay requirements of real-time users.
Abstract: A study of interference management schemes in wireless multi-user networks is presented. We analyze the interference management problem in cellular networks and show that interference management is an optimization problem, for which we propose a general formulation. Using this general formulation, we show that different interference management approaches are either exact or approximate solutions to this optimization problem. For each radio resource management technique, we provide a general overview and discuss its relation vis-à-vis other interference management techniques. As a case study, we then apply the proposed general formulation to interference management in OFDM wireless networks and show that it results in a joint transmit scheduling and dynamic sub-carrier and power allocation scheme. A polynomial-time heuristic algorithm is also proposed to solve the formulated optimization problem. The distinguishing feature of the proposed scheme is that it gives, in one shot, the transmission scheduling, the sub-carriers assigned to each user, and the power allocated to each sub-carrier, based on a fair and efficient framework while satisfying the delay requirements of real-time users.

Journal ArticleDOI
TL;DR: A novel chaotic FaceHashing scheme is proposed which preserves privacy of a biometric user and minimizes the system complexity with simple operations to attain the FaceHash.
Abstract: With the large-scale proliferation of biometric systems, the privacy and irrevocability of their data have become a hot research issue. Recently, some researchers proposed schemes to generate BioHashes, e.g. PalmHash; however, in this paper, we point out that the previous schemes are costly in terms of computational complexity and require possession of USB tokens to generate pseudorandom numbers. To overcome these problems, we propose a novel chaotic FaceHashing scheme which preserves the privacy of a biometric user. The presented scheme does not require users to possess USB tokens to generate pseudorandom sequences, which makes it a cost-effective solution. Besides, our scheme minimizes the system complexity with simple operations to attain the FaceHash. Experimental results show that the proposed scheme is efficient, secure, and revocable in case the FaceHash is stolen or compromised.

Journal ArticleDOI
TL;DR: This work investigates fair rate allocation for flows on paths in best path transfer (BPT) using SCTP multihoming, and proposes two distributed algorithms that achieve the global optimum within reasonable convergence times.
Abstract: There are multiple paths between pairs of multihomed source and destination hosts. However, some Stream Control Transmission Protocol (SCTP) versions always try to use the best available path and funnel all traffic onto it, leaving the remaining ones for redundancy. We investigate fair rate allocation for flows on paths in best path transfer (BPT) using SCTP multihoming. Firstly, path capacity is defined and the common paths shared by several different sources are considered. Based on the idea of network utility maximization (NUM), the rate allocation model for BPT is presented, in which path reputation is used as a novel metric for best path selection. In order to obtain the optimum of our model, two distributed algorithms are presented, and different fairness concepts can be achieved among competing sources if different utility functions are chosen accordingly. Simulation results confirm that the proposed algorithms can achieve the global optima within reasonable convergence times.
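As background on the NUM machinery the abstract builds on (not the paper's own algorithms): with logarithmic utilities, a standard dual-decomposition iteration drives the rates toward the proportionally fair optimum. A minimal single-link sketch, with illustrative step size and iteration count:

```python
def num_rate_allocation(capacity, n_sources, steps=5000, gamma=0.01):
    """Dual-decomposition sketch of NUM with log utilities:
    maximize sum(log x_i) subject to sum(x_i) <= capacity."""
    price = 1.0
    for _ in range(steps):
        # Each source independently maximizes log(x) - price * x,
        # whose closed-form solution is x = 1 / price.
        rates = [1.0 / price] * n_sources
        # The link adjusts its price along the dual (sub)gradient:
        # raise it when demand exceeds capacity, lower it otherwise.
        price = max(1e-6, price + gamma * (sum(rates) - capacity))
    return rates
```

For a 10-unit link shared by two sources this converges to 5 units each; other utility functions (e.g. alpha-fair families) would yield other fairness notions, as the abstract notes.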

Journal ArticleDOI
TL;DR: This paper proposes a method for traffic engineering that does not require sharing of important information across domains, and extends the idea of genetic algorithms to allow symbiotic evolution between two parties, and provides large reductions in network congestion.
Abstract: A group of problems in networking can most naturally be described as optimization problems (network design, traffic engineering, etc.). A great deal of research has been devoted to solving these problems, but it has concentrated on intra-domain problems, where one network operator has complete information and control. An emerging field is inter-domain engineering, for instance, traffic engineering between large autonomous networks. Extending intra-domain optimization techniques to inter-domain problems is often impossible without the information available within a domain, and providers are often unwilling to share such information. This paper presents an alternative: we propose a method for traffic engineering that does not require sharing of important information across domains. The method extends the idea of genetic algorithms to allow symbiotic evolution between two parties. Both parties may improve their performance without revealing their data, other than what would be easily observed in any case. We show that the method provides large reductions in network congestion, close to the optimal shortest path routing across a pair of networks. The results are highly robust to measurement noise, the method is very flexible, and it can be applied using existing routing.
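The abstract gives only the flavour of the symbiotic scheme; a deliberately tiny (1+1)-style co-evolution sketch, in which each party mutates only its own private genome and the coupling is a single shared cost value (the quadratic toy cost and all parameters are stand-ins, not the paper's genetic algorithm), might be:

```python
import random

def coevolve(cost, dim, gens=300, sigma=0.1, seed=1):
    """Two parties each hold a private genome (e.g. their own link weights);
    only the joint scalar cost is exchanged, mimicking inter-domain
    optimization without revealing internal data."""
    rng = random.Random(seed)
    a = [rng.random() for _ in range(dim)]
    b = [rng.random() for _ in range(dim)]
    for _ in range(gens):
        # Party A proposes a mutation of its own genome and keeps it
        # only if the shared cost improves; party B then does the same.
        cand = [x + rng.gauss(0, sigma) for x in a]
        if cost(cand, b) < cost(a, b):
            a = cand
        cand = [x + rng.gauss(0, sigma) for x in b]
        if cost(a, cand) < cost(a, b):
            b = cand
    return a, b
```

A toy "congestion" such as cost(a, b) = sum((x - 0.5)**2 for x in a + b) drives both genomes toward their optimum without either side ever seeing the other's vector; the real scheme evolves populations rather than single individuals.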

Journal ArticleDOI
TL;DR: In this article, the authors discuss the potential of utility computing to greatly increase the efficiency of IT operations by sharing resources across multiple users, however, this sharing introduces complex problems with regards to resource sharing.
Abstract: Utility computing has the potential to greatly increase the efficiency of IT operations by sharing resources across multiple users. This sharing, however, introduces complex problems with regards t...

Journal ArticleDOI
TL;DR: A model for the overbooking strategy is described and evaluated, which takes into account the value of the option, the correlated decisions taken by the prospective purchasers, and the penalty to be paid to the unsatisfied customers.
Abstract: Dynamic spectrum management makes it possible for the owner of usage rights on some frequency blocks to sublet each of them in real time and for a limited period of time. As a softer implementation with respect to the spot market, a two-stage assignment is proposed here through the use of options, which give buyers the right to purchase the usage right on a single block and for a timeslot. In the sale of options the primary owner may pursue an overbooking strategy, which consists in selling more blocks than are available, and which acts as a hedging tool against the risk of unsold blocks. A model for the overbooking strategy is described and evaluated, which takes into account the value of the option, the correlated decisions taken by the prospective purchasers, and the penalty to be paid to the unsatisfied customers. The dependence of the economical convenience of the overbooking strategy on the relevant parameters (among them the penalty value and the overbooking ratio) is shown for a significant range of cases.
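To make the trade-off concrete, here is a simplification of the model sketched in the abstract: it assumes independent exercise decisions rather than the correlated ones the paper models, and the function and parameter names are ours. The expected profit of writing `sold` options on `blocks` spectrum blocks is revenue minus the expected penalty paid to exercisers who cannot be served:

```python
from math import comb

def overbooking_profit(blocks, sold, p_exercise, price, penalty):
    """Expected profit when `sold` options are written on `blocks` spectrum
    blocks; each holder exercises independently with probability p_exercise.
    Each unsatisfied exerciser (beyond capacity) receives `penalty`."""
    revenue = sold * price
    expected_penalty = 0.0
    # Only outcomes with more exercisers than blocks incur a penalty.
    for k in range(blocks + 1, sold + 1):
        prob = comb(sold, k) * p_exercise**k * (1 - p_exercise)**(sold - k)
        expected_penalty += penalty * (k - blocks) * prob
    return revenue - expected_penalty
```

Sweeping the overbooking ratio sold/blocks against the penalty value reproduces, in miniature, the kind of convenience analysis the abstract describes.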

Journal ArticleDOI
TL;DR: Performance of the proposed algorithm is compared with two popular global optimization approaches, namely genetic algorithms and particle swarm optimization, and empirical results clearly illustrate that the proposed approach performs very well on the tested high-dimensional functions.
Abstract: This paper proposes a modified line search method which makes use of partial derivatives and re-starts the search process after a given number of iterations by modifying the boundaries based on the best solution obtained at the previous iteration (or set of iterations). Using several high-dimensional benchmark functions, we illustrate that the proposed Line Search Re-Start (LSRS) approach is very suitable for high-dimensional global optimization problems. Performance of the proposed algorithm is compared with two popular global optimization approaches, namely genetic algorithms and particle swarm optimization. Empirical results for up to 10,000 dimensions clearly illustrate that the proposed approach performs very well on the tested high-dimensional functions.
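The abstract describes LSRS only at a high level; a crude illustration of "search, then re-start with shrunken boundaries around the incumbent best" (using a derivative-free grid search in place of the paper's partial-derivative-based line search, and with a shrink factor and parameters that are our assumptions) might be:

```python
def lsrs_minimize(f, bounds, restarts=4, samples=11):
    """Sketch of the Line Search Re-Start idea: a coordinate-wise grid
    line search, after which the search box is shrunk around the best
    point found and the search begins again."""
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    x = [(l + h) / 2 for l, h in zip(lo, hi)]
    for _ in range(restarts):
        for d in range(len(bounds)):  # crude line search along coordinate d
            grid = [lo[d] + (hi[d] - lo[d]) * i / (samples - 1)
                    for i in range(samples)]
            x[d] = min(grid, key=lambda v: f(x[:d] + [v] + x[d + 1:]))
        # Re-start: centre a smaller box on the incumbent best solution
        width = [(h - l) / 4 for l, h in zip(lo, hi)]
        lo = [xi - w for xi, w in zip(x, width)]
        hi = [xi + w for xi, w in zip(x, width)]
    return x
```

Each restart quarters the box, so the resolution of the search improves geometrically even though each pass evaluates only a fixed grid per coordinate; this cheapness per dimension is what makes the restart idea attractive at high dimension.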

Journal ArticleDOI
Josip Zoric, Rolv Bræk
TL;DR: A scenario-driven approach for techno-business modeling and analysis based on a generic service platform model (GSPM) combined with scenario-based modeling of service portfolios and model-based mapping and projection techniques is presented.
Abstract: This work focuses on the techno-business analysis of service platforms and service portfolios. A service platform (SP) hosts services and enabling service functionality. Service providers deliver two main products: end-user services to their customers and enablers to other business actors. Third-party service providers combine enablers with their own functionality, wrap them in end-user services, and deliver them to their customers. SPs are very complex systems, technically as well as in their user and business aspects. Ignoring or oversimplifying any aspect can decrease the realism and value of techno-business analyses; on the other hand, including all aspects would make the model too complex to be practically useful. We present in this paper a scenario-driven approach to techno-business modeling and analysis which is sufficiently simple to be practical and complete enough to give realistic results. It is based on a generic service platform model (GSPM), represented by ontology, structural, and mathematical models, combined with scenario-based modeling of service portfolios and model-based mapping and projection techniques. It enables techno-business analysis at an early stage of service development, so that it can serve as a foundation for investment decisions. This analytical framework has been used in a series of practical cases; one of them, the provision of mobile service bundles, is presented in this work. We discuss our experience and suggest future improvements to the proposed approach to model-driven techno-business analysis.