Andre Mendes Cavalcante
Other affiliations: Federal University of Pará
Bio: Andre Mendes Cavalcante is an academic researcher from Nokia. The author has contributed to research in topics including computer science and MIMO, has an h-index of 11, and has co-authored 32 publications receiving 1,156 citations. Previous affiliations of Andre Mendes Cavalcante include the Federal University of Pará.
09 Jun 2013
TL;DR: This paper considers two of the most prominent wireless technologies available today, namely Long Term Evolution (LTE) and WiFi, addresses some problems that arise from their coexistence in the same band, and proposes a simple coexistence scheme that reuses the concept of almost blank subframes in LTE.
Abstract: The recent development of regulatory policies that permit the use of TV bands spectrum on a secondary basis has motivated discussion about coexistence of primary (e.g. TV broadcasts) and secondary users (e.g. WiFi users in TV spectrum). However, much less attention has been given to coexistence of different secondary wireless technologies in the TV white spaces. Lack of coordination between secondary networks may create severe interference situations, resulting in less efficient usage of the spectrum. In this paper, we consider two of the most prominent wireless technologies available today, namely Long Term Evolution (LTE) and WiFi, and address some problems that arise from their coexistence in the same band. We perform exhaustive system simulations and observe that WiFi is hampered much more significantly than LTE in coexistence scenarios. A simple coexistence scheme that reuses the concept of almost blank subframes in LTE is proposed, and it is observed that it can improve the WiFi throughput per user by up to 50 times in the studied scenarios.
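The almost-blank-subframe idea the abstract describes can be illustrated with a minimal sketch: LTE leaves a fraction of subframes blank, and WiFi only gets airtime during those blanks. The frame model and function names below are illustrative assumptions, not the paper's code.

```python
# Minimal sketch of an almost-blank-subframe (ABS) pattern. The model is an
# assumption for illustration: LTE blanks every k-th subframe, and WiFi can
# only sense the channel idle (and thus transmit) during blank subframes.

def abs_pattern(num_subframes: int, abs_ratio: float) -> list:
    """Mark roughly abs_ratio of subframes as blank (True = blank)."""
    blank_every = max(1, round(1 / abs_ratio))
    return [i % blank_every == 0 for i in range(num_subframes)]

def wifi_airtime_share(pattern: list) -> float:
    """Fraction of subframes in which WiFi gets to transmit."""
    return sum(pattern) / len(pattern)

pattern = abs_pattern(num_subframes=40, abs_ratio=0.25)
print(f"WiFi airtime share: {wifi_airtime_share(pattern):.2f}")  # 0.25
```

With no blank subframes WiFi's share collapses toward zero, which matches the observation that WiFi is hampered far more than LTE without such a scheme.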
02 Jun 2013
TL;DR: A simulator-based system-level analysis assessing network performance in an office scenario shows that LTE system performance is only slightly affected by coexistence, whereas Wi-Fi is significantly impacted by LTE transmissions.
Abstract: The deployment of modern mobile systems has faced severe challenges due to the current spectrum scarcity. The situation has been further worsened by the development of different wireless technologies and standards that can be used in the same frequency band. Furthermore, with the usage of smaller cells (e.g. pico, femto and wireless LAN), coexistence among heterogeneous networks (including different wireless technologies such as LTE and Wi-Fi deployed in the same frequency band) has become a major field of research in academia and industry. In this paper, we provide a performance evaluation of coexistence between LTE and Wi-Fi systems and show some of the challenges faced by the different technologies. We focus on a simulator-based system-level analysis in order to assess the network performance in an office scenario. Simulation results show that LTE system performance is slightly affected by coexistence, whereas Wi-Fi is significantly impacted by LTE transmissions. In coexistence, the Wi-Fi channel is most often blocked by LTE interference, forcing the Wi-Fi nodes to stay in LISTEN mode more than 96% of the time. This reflects directly on the Wi-Fi user throughput, which decreases by 70% to ≈100% depending on the scenario. Finally, some of the main issues that limit LTE/Wi-Fi coexistence and some pointers on the mutual interference management of both systems are provided.
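The >96% LISTEN-mode figure follows from Wi-Fi's carrier-sense behaviour: a node defers whenever the sensed energy exceeds its clear-channel-assessment (CCA) threshold, and a continuous LTE downlink keeps the channel above that threshold almost all the time. The threshold value and the interference model below are assumptions for illustration.

```python
import random

# Illustrative sketch of the carrier-sense mechanism behind the LISTEN-mode
# result: a Wi-Fi node stays in LISTEN whenever sensed energy exceeds its
# CCA threshold. -82 dBm is a commonly cited 802.11 detection level; the
# Gaussian interference model is an assumption, not the paper's simulator.

CCA_THRESHOLD_DBM = -82.0

def fraction_listening(interference_dbm, threshold=CCA_THRESHOLD_DBM):
    """Share of samples in which the channel is sensed busy."""
    busy = sum(1 for p in interference_dbm if p > threshold)
    return busy / len(interference_dbm)

random.seed(0)
# A nearby, continuously transmitting LTE cell keeps received interference
# well above the CCA threshold.
lte_interference = [random.gauss(-60.0, 3.0) for _ in range(1000)]
print(f"Time in LISTEN: {fraction_listening(lte_interference):.0%}")
```

With interference some 20 dB above the threshold, the deferral fraction saturates near 100%, consistent with the abstract's >96% figure.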
TL;DR: The issues that arise from the concurrent operation of LTE and Wi-Fi in the same unlicensed bands are discussed from the point of view of radio resource management, and it is shown that Wi-Fi is severely impacted by LTE transmissions.
Abstract: The expansion of wireless broadband access network deployments is resulting in increased scarcity of available radio spectrum. It is very likely that in the near future, cellular technologies and wireless local area networks will need to coexist in the same unlicensed bands. However, the two most prominent technologies, LTE and Wi-Fi, were designed to work in different bands and not to coexist in a shared band. In this article, we discuss the issues that arise from the concurrent operation of LTE and Wi-Fi in the same unlicensed bands from the point of view of radio resource management. We show that Wi-Fi is severely impacted by LTE transmissions; hence, the coexistence of LTE and Wi-Fi needs to be carefully investigated. We discuss some possible coexistence mechanisms and future research directions that may lead to successful joint deployment of LTE and Wi-Fi in the same unlicensed band.
01 Sep 2013
TL;DR: The proposed LTE UL power control with an interference-aware power operating point is a flexible tool to deal with the trade-off between LTE and Wi-Fi performance in coexistence, since it can set different LTE/Wi-Fi coexistence configurations through the choice of a single parameter.
Abstract: Spectrum sharing is a powerful alternative to deal with the exponential increase in wireless communication capacity demand. In this context, the coexistence of two of the most prominent wireless technologies today, Long Term Evolution (LTE) and Wi-Fi, is an important research topic. In the most common Wi-Fi network operation, the Distributed Coordination Function (DCF), communication nodes access the channel only if the interference level is below a certain threshold. Wi-Fi operation is therefore severely affected when in coexistence with LTE. This paper proposes the use of LTE uplink (UL) power control to improve LTE/Wi-Fi coexistence. With the introduction of an additional factor to the conventional LTE UL power control, a controlled decrease of LTE UL transmit powers is carried out according to interference measurements, giving opportunity to Wi-Fi transmissions. The proposed LTE UL power control with an interference-aware power operating point is a flexible tool to deal with the trade-off between LTE and Wi-Fi performance in coexistence, since it is able to set different LTE/Wi-Fi coexistence configurations with the choice of a single parameter. Simulation results show that the proposed approach can provide similar or better performance for both LTE and Wi-Fi networks than a previously proposed interference avoidance mechanism.
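The mechanism can be sketched on top of the standard LTE open-loop uplink power formula, P = min(Pmax, P0 + 10·log10(M) + α·PL), with an additional interference-aware back-off term as the abstract describes. The parameter names and the simple subtractive back-off below are illustrative assumptions; the paper's exact rule may differ.

```python
import math

# Sketch of conventional LTE open-loop UL power control plus an
# interference-aware back-off. P0, alpha, and the back-off form are
# illustrative assumptions based on the abstract, not the paper's code.

def ul_tx_power_dbm(p0_dbm, alpha, pathloss_db, num_prb,
                    interference_backoff_db=0.0, p_max_dbm=23.0):
    """P = min(Pmax, P0 + 10*log10(M) + alpha*PL - backoff), in dBm."""
    p = p0_dbm + 10 * math.log10(num_prb) + alpha * pathloss_db
    # The interference-aware operating point lowers UL power when
    # measurements indicate Wi-Fi would otherwise be blocked.
    return min(p_max_dbm, p - interference_backoff_db)

# Conventional setting vs. a 10 dB back-off that gives Wi-Fi room to transmit.
print(ul_tx_power_dbm(-80, 0.8, 110, 4))
print(ul_tx_power_dbm(-80, 0.8, 110, 4, interference_backoff_db=10))
```

The single back-off parameter is what lets one knob trade LTE throughput against Wi-Fi channel-access opportunities, which is the flexibility the abstract emphasizes.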
TL;DR: The design, implementation, and evaluation of Molé are described, a mobile organic localization engine that employs several new techniques, including a new statistical positioning algorithm to differentiate between neighboring places, a motion detector to reduce update lag, and a scalable “cloud”-based fingerprint distribution system.
Abstract: We describe the design, implementation, and evaluation of Molé, a mobile organic localisation engine. Unlike previous work on crowd-sourced WiFi positioning, Molé uses a hierarchical name space. By not relying on a map and by being more strict than uninterpreted names for places, Molé aims for a more flexible and scalable point in the design space of localisation systems. Molé employs several new techniques, including a new statistical positioning algorithm to differentiate between neighbouring places, a motion detector to reduce update lag, and a scalable 'cloud'-based fingerprint distribution system. Molé's localisation algorithm, called Maximum Overlap (MAO), accounts for temporal variations in a place's fingerprint in a principled manner. It also allows for aggregation of fingerprints from many users and is compact enough for on-device storage. We show through end-to-end experiments in two deployments that MAO is significantly more accurate than state-of-the-art Bayesian-based localisers. We also show that non-experts can use Molé to quickly survey a building, enabling room-grained location-based services for themselves and others.
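A hedged sketch of the "maximum overlap" idea: score each candidate place by the overlap between the current scan's per-AP RSSI histograms and that place's stored histograms, then pick the highest-scoring place. The histogram representation and data below are assumptions based on the abstract, not the MAO implementation.

```python
# Illustrative Maximum Overlap (MAO)-style matcher. Representing each
# fingerprint as per-AP RSSI histograms is an assumption for this sketch.

def histogram_overlap(h1: dict, h2: dict) -> float:
    """Shared probability mass of two normalized RSSI histograms."""
    return sum(min(h1.get(b, 0.0), h2.get(b, 0.0)) for b in set(h1) | set(h2))

def localize(scan: dict, fingerprints: dict) -> str:
    """scan: AP -> histogram; fingerprints: place -> (AP -> histogram)."""
    def score(place):
        return sum(histogram_overlap(scan.get(ap, {}), hist)
                   for ap, hist in fingerprints[place].items())
    return max(fingerprints, key=score)

fingerprints = {
    "room-101": {"ap1": {-50: 0.7, -55: 0.3}},
    "room-102": {"ap1": {-80: 0.9, -85: 0.1}},
}
print(localize({"ap1": {-50: 0.6, -55: 0.4}}, fingerprints))  # room-101
```

Because overlap is computed over full histograms rather than mean RSSI, temporal variation in a place's signal is accounted for naturally, which is the property the abstract highlights.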
01 Nov 2002
TL;DR: Drive development with automated tests, a style of development called “Test-Driven Development” (TDD for short), which aims to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved.
Abstract: From the Book: "Clean code that works" is Ron Jeffries' pithy phrase. The goal is clean code that works, and for a whole bunch of reasons:
- Clean code that works is a predictable way to develop. You know when you are finished, without having to worry about a long bug trail.
- Clean code that works gives you a chance to learn all the lessons that the code has to teach you. If you only ever slap together the first thing you think of, you never have time to think of a second, better, thing.
- Clean code that works improves the lives of users of our software.
- Clean code that works lets your teammates count on you, and you on them.
- Writing clean code that works feels good.
But how do you get to clean code that works? Many forces drive you away from clean code, and even code that works. Without taking too much counsel of our fears, here's what we do: drive development with automated tests, a style of development called "Test-Driven Development" (TDD for short). In Test-Driven Development, you:
- Write new code only if you first have a failing automated test.
- Eliminate duplication.
Two simple rules, but they generate complex individual and group behavior. Some of the technical implications are:
- You must design organically, with running code providing feedback between decisions.
- You must write your own tests, since you can't wait twenty times a day for someone else to write a test.
- Your development environment must provide rapid response to small changes.
- Your designs must consist of many highly cohesive, loosely coupled components, just to make testing easy.
The two rules imply an order to the tasks of programming:
1. Red: write a little test that doesn't work, perhaps doesn't even compile at first.
2. Green: make the test work quickly, committing whatever sins necessary in the process.
3. Refactor: eliminate all the duplication created in just getting the test to work.
Red/green/refactor. The TDD mantra.
Assuming for the moment that such a style is possible, it might be possible to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved. If so, writing only code demanded by failing tests also has social implications:
- If the defect density can be reduced enough, QA can shift from reactive to proactive work.
- If the number of nasty surprises can be reduced enough, project managers can estimate accurately enough to involve real customers in daily development.
- If the topics of technical conversations can be made clear enough, programmers can work in minute-by-minute collaboration instead of daily or weekly collaboration.
- Again, if the defect density can be reduced enough, we can have shippable software with new functionality every day, leading to new business relationships with customers.
So, the concept is simple, but what's my motivation? Why would a programmer take on the additional work of writing automated tests? Why would a programmer work in tiny little steps when their mind is capable of great soaring swoops of design? Courage.
Courage. Test-driven development is a way of managing fear during programming. I don't mean fear in a bad way, pow widdle prwogwammew needs a pacifiew, but fear in the legitimate, this-is-a-hard-problem-and-I-can't-see-the-end-from-the-beginning sense. If pain is nature's way of saying "Stop!", fear is nature's way of saying "Be careful." Being careful is good, but fear has a host of other effects:
- Makes you tentative.
- Makes you want to communicate less.
- Makes you shy from feedback.
- Makes you grumpy.
None of these effects are helpful when programming, especially when programming something hard. So, how can you face a difficult situation and:
- Instead of being tentative, begin learning concretely as quickly as possible.
- Instead of clamming up, communicate more clearly.
- Instead of avoiding feedback, search out helpful, concrete feedback.
- (You'll have to work on grumpiness on your own.)
Imagine programming as turning a crank to pull a bucket of water from a well. When the bucket is small, a free-spinning crank is fine. When the bucket is big and full of water, you’re going to get tired before the bucket is all the way up. You need a ratchet mechanism to enable you to rest between bouts of cranking. The heavier the bucket, the closer the teeth need to be on the ratchet. The tests in test-driven development are the teeth of the ratchet. Once you get one test working, you know it is working, now and forever. You are one step closer to having everything working than you were when the test was broken. Now get the next one working, and the next, and the next. By analogy, the tougher the programming problem, the less ground should be covered by each test. Readers of Extreme Programming Explained will notice a difference in tone between XP and TDD. TDD isn’t an absolute like Extreme Programming. XP says, “Here are things you must be able to do to be prepared to evolve further.” TDD is a little fuzzier. TDD is an awareness of the gap between decision and feedback during programming, and techniques to control that gap. “What if I do a paper design for a week, then test-drive the code? Is that TDD?” Sure, it’s TDD. You were aware of the gap between decision and feedback and you controlled the gap deliberately. That said, most people who learn TDD find their programming practice changed for good. “Test Infected” is the phrase Erich Gamma coined to describe this shift. You might find yourself writing more tests earlier, and working in smaller steps than you ever dreamed would be sensible. On the other hand, some programmers learn TDD and go back to their earlier practices, reserving TDD for special occasions when ordinary programming isn’t making progress. There are certainly programming tasks that can’t be driven solely by tests (or at least, not yet). 
Security software and concurrency, for example, are two topics where TDD is not sufficient to mechanically demonstrate that the goals of the software have been met. Security relies on essentially defect-free code, true, but also on human judgement about the methods used to secure the software. Subtle concurrency problems can't be reliably duplicated by running the code. Once you are finished reading this book, you should be ready to:
- Start simply.
- Write automated tests.
- Refactor to add design decisions one at a time.
This book is organized into three sections:
- An example of writing typical model code using TDD. The example is one I got from Ward Cunningham years ago, and have used many times since: multi-currency arithmetic. In it you will learn to write tests before code and grow a design organically.
- An example of testing more complicated logic, including reflection and exceptions, by developing a framework for automated testing. This example also serves to introduce you to the xUnit architecture that is at the heart of many programmer-oriented testing tools. In the second example you will learn to work in even smaller steps than in the first example, including the kind of self-referential hooha beloved of computer scientists.
- Patterns for TDD. Included are patterns for deciding what tests to write, how to write tests using xUnit, and a greatest-hits selection of the design patterns and refactorings used in the examples.
I wrote the examples imagining a pair programming session. If you like looking at the map before wandering around, you may want to go straight to the patterns in Section 3 and use the examples as illustrations. If you prefer just wandering around and then looking at the map to see where you've been, try reading the examples through and referring to the patterns when you want more detail about a technique, then using the patterns as a reference.
Several reviewers have commented they got the most out of the examples when they started up a programming environment and entered the code and ran the tests as they read. A note about the examples. Both examples, multi-currency calculation and a testing framework, appear simple. There are (and I have seen) complicated, ugly, messy ways of solving the same problems. I could have chosen one of those complicated, ugly, messy solutions to give the book an air of “reality.” However, my goal, and I hope your goal, is to write clean code that works. Before teeing off on the examples as being too simple, spend 15 seconds imagining a programming world in which all code was this clear and direct, where there were no complicated solutions, only apparently complicated problems begging for careful thought. TDD is a practice that can help you lead yourself to exactly that careful thought.
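The red/green/refactor cycle described above can be shown in miniature. The example below uses Python's `unittest` in place of the book's xUnit code, and the trivial `multiply_money` function (an echo of the book's multi-currency example) is an illustrative stand-in, not the book's actual code.

```python
import unittest

# Green step: the simplest code that makes the previously failing test pass.
def multiply_money(amount: int, multiplier: int) -> int:
    return amount * multiplier

class TestMoney(unittest.TestCase):
    def test_multiplication(self):
        # Red step: this test is written first and fails (the function does
        # not even exist yet); only then is multiply_money implemented.
        self.assertEqual(multiply_money(5, 2), 10)

# Run the suite in-process: this run-and-see loop is the "teeth of the
# ratchet" from the well analogy.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMoney)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("green" if result.wasSuccessful() else "red")
```

The refactor step would then remove any duplication introduced while getting to green, with the passing test guarding against regressions.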
01 Jan 2007
TL;DR: In this paper, the authors provide updates to IEEE 802.16's MIB for the MAC, PHY and associated management procedures in order to accommodate recent extensions to the standard.
Abstract: This document provides updates to IEEE Std 802.16's MIB for the MAC, PHY and associated management procedures in order to accommodate recent extensions to the standard.
TL;DR: This survey overviews recent advances on two major areas of Wi-Fi fingerprint localization: advanced localization techniques and efficient system deployment.
Abstract: The growing commercial interest in indoor location-based services (ILBS) has spurred recent development of many indoor positioning techniques. Due to the absence of global positioning system (GPS) signal, many other signals have been proposed for indoor usage. Among them, Wi-Fi (802.11) emerges as a promising one due to the pervasive deployment of wireless LANs (WLANs). In particular, Wi-Fi fingerprinting has been attracting much attention recently because it does not require line-of-sight measurement of access points (APs) and achieves high applicability in complex indoor environment. This survey overviews recent advances on two major areas of Wi-Fi fingerprint localization: advanced localization techniques and efficient system deployment. Regarding advanced techniques to localize users, we present how to make use of temporal or spatial signal patterns, user collaboration, and motion sensors. Regarding efficient system deployment, we discuss recent advances on reducing offline labor-intensive survey, adapting to fingerprint changes, calibrating heterogeneous devices for signal collection, and achieving energy efficiency for smartphones. We study and compare the approaches through our deployment experiences, and discuss some future directions.
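The basic offline/online fingerprinting workflow the survey covers can be sketched briefly: an offline site survey builds a radio map of (location, RSSI vector) pairs, and online localization matches a live scan to the nearest fingerprint in signal space. The radio-map data and nearest-neighbour matcher below are illustrative assumptions, not any surveyed system's code.

```python
import math

# Minimal deterministic fingerprinting sketch. Offline phase: a radio map of
# surveyed locations and their per-AP RSSI (dBm). Online phase: match a live
# scan to the closest fingerprint in Euclidean signal space. Data are made up.

RADIO_MAP = {
    (0, 0): [-40, -70, -80],
    (5, 0): [-55, -60, -75],
    (5, 5): [-70, -50, -60],
}

def locate(scan, radio_map=RADIO_MAP):
    """Return the surveyed location whose fingerprint is closest to the scan."""
    def dist(fingerprint):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(scan, fingerprint)))
    return min(radio_map, key=lambda loc: dist(radio_map[loc]))

print(locate([-54, -61, -74]))  # (5, 0)
```

Most of the survey's "efficient deployment" topics map onto weaknesses of this baseline: the offline survey is labor-intensive, fingerprints drift over time, and heterogeneous devices report RSSI differently.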
TL;DR: In this paper, the potential gains and limitations of network densification and spectral efficiency enhancement techniques in ultra-dense small cell deployments are analysed, and the top ten challenges to be addressed to bring ultra-dense small cell deployments to reality are discussed.
Abstract: Today's heterogeneous networks comprised of mostly macrocells and indoor small cells will not be able to meet the upcoming traffic demands. Indeed, it is forecasted that at least a 100× network capacity increase will be required to meet the traffic demands in 2020. As a result, vendors and operators are now looking at using every tool at hand to improve network capacity. In this epic campaign, three paradigms are noteworthy, i.e., network densification, the use of higher frequency bands and spectral efficiency enhancement techniques. This paper aims at bringing further common understanding and analysing the potential gains and limitations of these three paradigms, together with the impact of idle mode capabilities at the small cells as well as the user equipment density and distribution in outdoor scenarios. Special attention is paid to network densification and its implications when transiting to ultra-dense small cell deployments. Simulation results show that, compared to the baseline case with an average inter-site distance of 200 m and a 100 MHz bandwidth, network densification with an average inter-site distance of 35 m can increase the average UE throughput by 7.56×, while the use of the 10 GHz band with a 500 MHz bandwidth can further increase the network capacity up to 5×, resulting in an average of 1.27 Gbps per UE. The use of beamforming with up to 4 antennas per small cell BS lags behind, with average throughput gains around 30% and cell-edge throughput gains of up to 2×. Considering an extreme densification, an average inter-site distance of 5 m can increase the average and cell-edge UE throughput by 18× and 48×, respectively. Our study also shows how network densification reduces multi-user diversity, and thus proportional fair alike schedulers start losing their advantages with respect to round robin ones.
The energy efficiency of these ultra-dense small cell deployments is also analysed, indicating the benefits of energy harvesting approaches to make these deployments more energy-efficient. Finally, the top ten challenges to be addressed to bring ultra-dense small cell deployments to reality are also discussed.
TL;DR: Simulation results show that the proposed network architecture and interference avoidance schemes can significantly increase the capacity of 4G heterogeneous cellular networks while maintaining the service quality of Wi-Fi systems.
Abstract: As two major players in terrestrial wireless communications, Wi-Fi systems and cellular networks have different origins and have largely evolved separately. Motivated by the exponentially increasing wireless data demand, cellular networks are evolving towards a heterogeneous and small cell network architecture, wherein small cells are expected to provide very high capacity. However, due to the limited licensed spectrum for cellular networks, any effort to achieve capacity growth through network densification will face the challenge of severe inter-cell interference. In view of this, recent standardization developments have started to consider the opportunities for cellular networks to use the unlicensed spectrum bands, including the 2.4 GHz and 5 GHz bands that are currently used by Wi-Fi, Zigbee and some other communication systems. In this article, we look into the coexistence of Wi-Fi and 4G cellular networks sharing the unlicensed spectrum. We introduce a network architecture where small cells use the same unlicensed spectrum that Wi-Fi systems operate in without affecting the performance of Wi-Fi systems. We present an almost blank subframe (ABS) scheme without priority to mitigate the co-channel interference from small cells to Wi-Fi systems, and propose an interference avoidance scheme based on small cells estimating the density of nearby Wi-Fi access points to facilitate their coexistence while sharing the same unlicensed spectrum. Simulation results show that the proposed network architecture and interference avoidance schemes can significantly increase the capacity of 4G heterogeneous cellular networks while maintaining the service quality of Wi-Fi systems.