scispace - formally typeset
Testbed

About: Testbed is a research topic. Over its lifetime, 10,858 publications have been published within this topic, receiving 147,147 citations. The topic is also known as: test bed.


Papers
Journal Article (DOI)
TL;DR: ComFIT is a development environment for the Internet of Things, grounded in the paradigms of model-driven development and cloud computing, that supports automatic code generation, execution of simulations, and compilation of applications for IoT platforms with low development effort.

41 citations

Proceedings Article (DOI)
01 Dec 2013
TL;DR: This paper evaluates and compares implementations of these two load-sharing protocols using both lab measurements and an intercontinental testbed realized via the Internet between Europe and China, and highlights that the different path management strategies of the two protocols have a significant impact on their performance in real Internet scenarios.
Abstract: The market penetration of access devices with multiple network interfaces has increased dramatically over the last few years. As a consequence, there is a strong interest in using all of the available interfaces concurrently to improve data throughput. Corresponding extensions of established transport protocols are receiving considerable attention within research and standardization. Currently, two approaches are the focus of IETF work: the Multipath TCP (MPTCP) extension for TCP and the Concurrent Multipath Transfer extension for SCTP (CMT-SCTP). This paper evaluates and compares implementations of these two load-sharing protocols using both lab measurements and an intercontinental testbed realized via the Internet between Europe and China. The experiments show that some performance-critical aspects have not been taken into account in previous studies. Furthermore, they show that the simple scenario with two disjoint paths, which is typically used for evaluation, does not sufficiently cover the real Internet environment. Based on these insights, we highlight that the different path management strategies of the two protocols have a significant impact on their performance in real Internet scenarios.

40 citations
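The path-management finding above — that how data is split across paths dominates performance — can be illustrated with a toy scheduler. This is a hypothetical sketch, not MPTCP's or CMT-SCTP's actual scheduling logic; the path names and capacity estimates are invented examples.

```python
# Hypothetical sketch: a capacity-weighted multipath scheduler, illustrating
# why path management strategy matters for load-sharing transports such as
# MPTCP and CMT-SCTP. Path names and capacities below are invented.

def schedule_chunks(chunks, paths):
    """Assign data chunks to paths proportionally to estimated capacity.

    paths: dict mapping path name -> estimated throughput (e.g. bit/s).
    Returns a dict mapping path name -> list of assigned chunks.
    """
    total = sum(paths.values())
    assignment = {name: [] for name in paths}
    sent = {name: 0 for name in paths}
    for chunk in chunks:
        # Greedy deficit rule: pick the path furthest below its
        # capacity-proportional share of the chunks sent so far.
        best = max(paths, key=lambda p: paths[p] / total
                                        - sent[p] / max(1, len(chunks)))
        assignment[best].append(chunk)
        sent[best] += 1
    return assignment

if __name__ == "__main__":
    paths = {"wifi": 30e6, "lte": 10e6}   # assumed capacities in bit/s
    out = schedule_chunks(list(range(8)), paths)
    print({p: len(c) for p, c in out.items()})  # -> {'wifi': 6, 'lte': 2}
```

With a 3:1 capacity ratio the scheduler lands on a 6:2 split; a scheduler that ignored path quality (plain round-robin) would stall the faster path behind the slower one, which is exactly the kind of effect the paper's Internet experiments expose.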

Journal Article (DOI)
TL;DR: For the first time, the experimental data arising from this large-scale testbed using software-defined radios are modeled, and the algorithm's performance is derived using a central limit theorem recently obtained in the literature.
Abstract: This paper has two parts. The first one deals with how to use large random matrices as building blocks to model the massive data arising from the massive (or large-scale) multiple-input, multiple-output (MIMO) system. As a result, we apply this model for distributed spectrum sensing and network monitoring. This part boils down to streaming, distributed massive data, for which a new algorithm is obtained and its performance is derived using a central limit theorem recently obtained in the literature. The second part deals with the large-scale testbed using software-defined radios (particularly, the universal software radio peripheral), which took us more than four years to develop into this 70-node network testbed. To demonstrate the power of the software-defined radio, we reconfigure our testbed quickly into a testbed for massive MIMO. The massive data of this testbed are of central interest in this paper. For the first time, we have modeled the experimental data arising from this testbed. To the best of our knowledge, there is no other similar work.

40 citations
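The random-matrix approach to spectrum sensing mentioned above can be illustrated with a classical detector. This is a hedged sketch, not the paper's algorithm: it shows the standard maximum-minimum eigenvalue (MME) test, one common way the eigenvalue spread of a large sample covariance matrix separates correlated signal from white noise. The array sizes and threshold are illustrative assumptions.

```python
# Hedged sketch (not the paper's algorithm): the classical maximum-minimum
# eigenvalue (MME) detector for cooperative spectrum sensing. Under noise
# only, the sample covariance eigenvalues cluster (Marchenko-Pastur law);
# a common source signal pushes the top eigenvalue far above the rest.
import numpy as np

def mme_detect(samples, threshold=2.0):
    """samples: (num_sensors, num_snapshots) array of received values.
    Returns True if the eigenvalue spread of the sample covariance
    suggests a correlated signal is present rather than white noise."""
    cov = samples @ samples.conj().T / samples.shape[1]  # sample covariance
    eig = np.linalg.eigvalsh(cov)                        # real, ascending
    return bool(eig[-1] / eig[0] > threshold)

rng = np.random.default_rng(0)
noise = rng.standard_normal((8, 2000))            # noise-only snapshots
tone = rng.standard_normal((1, 2000))             # common source waveform
signal = noise + 3.0 * np.ones((8, 1)) @ tone     # same signal at all sensors
print(mme_detect(noise), mme_detect(signal))      # -> False True
```

The appeal of eigenvalue-ratio tests in this setting is that the noise-only ratio concentrates at a value predicted by random matrix theory, so the threshold can be set without estimating the noise power.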

Dissertation
01 Jan 2005
TL;DR: This dissertation explores three systems designed to mask Internet failures, and, through a study of three years of data collected on a 31-site testbed, why these failures happen and how effectively they can be masked.
Abstract: The end-to-end availability of Internet services is between two and three orders of magnitude worse than other important engineered systems, including the US airline system, the 911 emergency response system, and the US public telephone system. This dissertation explores three systems designed to mask Internet failures, and, through a study of three years of data collected on a 31-site testbed, why these failures happen and how effectively they can be masked. A core aspect of many of the failures that interrupt end-to-end communication is that they fall outside the expected domain of well-behaved network failures. Many traditional techniques cope with link and router failures; as a result, the remaining failures are those caused by software and hardware bugs, misconfiguration, malice, or the inability of current routing systems to cope with persistent congestion. The effects of these failures are exacerbated because Internet services depend upon the proper functioning of many components—wide-area routing, access links, the domain name system, and the servers themselves—and a failure in any of them can prove disastrous to the proper functioning of the service. This dissertation describes three complementary systems to increase Internet availability in the face of such failures. Each system builds upon the idea of an overlay network, a network created dynamically between a group of cooperating Internet hosts. The first two systems, Resilient Overlay Networks (RON) and Multi-homed Overlay Networks (MONET), determine whether the Internet path between two hosts is working on an end-to-end basis. Both systems exploit the considerable redundancy available in the underlying Internet to find failure-disjoint paths between nodes, and forward traffic along a working path. RON is able to avoid 50% of the Internet outages that interrupt communication between a small group of communicating nodes.
MONET is more aggressive, combining an overlay network of Web proxies with explicitly engineered redundant links to the Internet to also mask client access link failures. Eighteen months of measurements from a six-site deployment of MONET show that it increases a client's ability to access working Web sites by nearly an order of magnitude. Where RON and MONET combat accidental failures, the Mayday system guards against denial-of-service attacks by surrounding a vulnerable Internet server with a ring of filtering routers. Mayday then uses a set of overlay nodes to act as mediators between the service and its clients, permitting only properly authenticated traffic to reach the server.

40 citations
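The overlay-routing idea behind RON and MONET — route around a broken direct path via a cooperating host on a failure-disjoint path — can be sketched in a few lines. This is an illustrative simplification, not the actual RON implementation; the probe function and topology are assumptions.

```python
# Illustrative sketch of overlay routing in the spirit of RON (not the
# actual implementation): if the direct Internet path between two hosts
# fails, try forwarding through an intermediate overlay node, exploiting
# the fact that failures on disjoint paths are often uncorrelated.

def find_working_path(src, dst, overlay_nodes, link_up):
    """link_up(a, b) -> bool is an assumed end-to-end probe of a path.
    Returns a hop list from src to dst, or None if nothing works."""
    if link_up(src, dst):
        return [src, dst]                      # direct path is fine
    for hop in overlay_nodes:                  # one-hop detour via overlay
        if hop not in (src, dst) and link_up(src, hop) and link_up(hop, dst):
            return [src, hop, dst]
    return None

# Toy topology: the direct path A->C is down, but A->B and B->C work.
up = {("A", "B"), ("B", "A"), ("B", "C"), ("C", "B")}
path = find_working_path("A", "C", ["B"], lambda a, b: (a, b) in up)
print(path)  # -> ['A', 'B', 'C']
```

A single intermediate hop is usually enough in practice, which is why RON-style systems can keep probing overhead low while still masking a large fraction of outages.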

Proceedings Article (DOI)
05 Dec 1999
TL;DR: This paper describes the practical systems issues encountered in building a smoothing proxy service using off-the-shelf components, in the context of an MPEG-2/RTP streaming testbed.
Abstract: Provisioning network resources for multimedia streaming is complicated by the bursty, high-bandwidth traffic introduced by compressed video, as well as the variability of the throughput, delay, and loss properties of the Internet, and the lack of end-to-end control by any one service provider. To address these problems, we propose that proxies should perform online smoothing by transmitting frames into the client playback buffer in advance of each burst, to reduce network resource requirements without degradation in video quality. This paper describes the practical systems issues we have encountered in building a smoothing proxy service using off-the-shelf components, in the context of an MPEG-2/RTP streaming testbed.

40 citations
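The online-smoothing idea above — transmitting frames into the client's playback buffer ahead of each burst — can be sketched as a schedule computation. This is an assumed simplification, not the paper's proxy implementation; the tick granularity and buffer model are illustrative.

```python
# Illustrative sketch of work-ahead smoothing (an assumed simplification,
# not the paper's proxy service): choose per-tick send amounts that meet
# every frame deadline while keeping the transmission rate as even as the
# deadlines and the client's buffer allow.

def smooth_schedule(frames, buffer_bytes):
    """frames[t] = bytes of the frame the client plays at tick t.
    Returns per-tick bytes to send so each frame arrives by its
    deadline without overflowing the client's playback buffer."""
    n = len(frames)
    demand, total = [], 0
    for f in frames:
        total += f
        demand.append(total)       # cumulative bytes needed by tick t
    sent, schedule = 0.0, []
    for t in range(n):
        # smallest constant rate that still meets every future deadline
        need = max((demand[k] - sent) / (k - t + 1) for k in range(t, n))
        send = max(0.0, need)
        consumed = demand[t - 1] if t > 0 else 0     # bytes already played
        send = min(send, consumed + buffer_bytes - sent)  # buffer cap
        sent += send
        schedule.append(send)
    return schedule

# A 40-byte burst in the middle is pre-drained at a peak rate of 20
# instead of requiring a 40-byte spike:
print(smooth_schedule([10, 10, 40, 10, 10], buffer_bytes=100))
# -> [20.0, 20.0, 20.0, 10.0, 10.0]
```

The peak transmission rate drops from 40 to 20 bytes per tick, which is the provisioning benefit the paper targets: the network path can be sized for the smoothed rate rather than the raw burst.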


Network Information

Related Topics (5)
- Network packet: 159.7K papers, 2.2M citations (92% related)
- Wireless sensor network: 142K papers, 2.4M citations (92% related)
- Server: 79.5K papers, 1.4M citations (92% related)
- Wireless network: 122.5K papers, 2.1M citations (92% related)
- Wireless: 133.4K papers, 1.9M citations (90% related)
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2023    917
2022    2,046
2021    499
2020    590
2019    693
2018    639