Institution

Cisco Systems, Inc.

Company · San Jose, California, United States
About: Cisco Systems, Inc. is a company headquartered in San Jose, California, United States. It is known for research contributions in the topics of Network packet & Node (networking). The organization has 13783 authors who have published 18954 publications receiving 471217 citations.


Papers
Proceedings Article

[...]

17 Aug 2012
TL;DR: This paper argues that the Fog's defining characteristics make it the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
Abstract: Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes, e) Predominant role of wireless access, f) Strong presence of streaming and real time applications, g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).

3,845 citations

Journal Article

[...]

TL;DR: This paper demonstrates the benefits of cache sharing, measures the overhead of the existing protocols, and proposes a new protocol called "summary cache", which reduces the number of intercache protocol messages, reduces the bandwidth consumption, and eliminates 30% to 95% of the protocol CPU overhead, all while maintaining almost the same cache hit ratios as ICP.
Abstract: The sharing of caches among Web proxies is an important technique to reduce Web traffic and alleviate network bottlenecks. Nevertheless, it is not widely deployed due to the overhead of existing protocols. In this paper we demonstrate the benefits of cache sharing, measure the overhead of the existing protocols, and propose a new protocol called "summary cache". In this new protocol, each proxy keeps a summary of the cache directory of each participating proxy, and checks these summaries for potential hits before sending any queries. Two factors contribute to our protocol's low overhead: the summaries are updated only periodically, and the directory representations are very economical, as low as 8 bits per entry. Using trace-driven simulations and a prototype implementation, we show that, compared to existing protocols such as the Internet cache protocol (ICP), summary cache reduces the number of intercache protocol messages by a factor of 25 to 60, reduces the bandwidth consumption by over 50%, and eliminates 30% to 95% of the protocol CPU overhead, all while maintaining almost the same cache hit ratios as ICP. Hence summary cache scales to a large number of proxies. (This paper is a revision of Fan et al. 1998; we add more data and analysis in this version.)
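The "as low as 8 bits per entry" figure is what makes exchanging summaries cheap: in the published design each proxy's cache directory is represented as a Bloom filter rather than an exact URL list. Below is a minimal Python sketch of that idea, assuming a URL-keyed cache; the class name, filter size, and hash scheme are our own illustration, not code from the paper.

```python
# Minimal sketch of a summary-cache-style membership check (illustrative only).
# Each proxy periodically publishes a Bloom filter over its cached URLs, and a
# peer consults that filter before sending an inter-cache query. The parameters
# below are hypothetical, not taken from the paper.
import hashlib

class CacheSummary:
    def __init__(self, num_bits=8 * 1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, url):
        # Derive the k bit positions from one digest of the URL.
        digest = hashlib.sha256(url.encode()).digest()
        for i in range(self.num_hashes):
            chunk = int.from_bytes(digest[4 * i:4 * i + 4], "big")
            yield chunk % self.num_bits

    def add(self, url):
        for pos in self._positions(url):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def may_contain(self, url):
        # False: definitely not cached. True: probably cached (false positives
        # possible, which only cost a wasted query, never a wrong answer).
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(url))

# Usage: query a peer only when its advertised summary says the object is probably there.
peer_summary = CacheSummary()
peer_summary.add("http://example.com/index.html")
if peer_summary.may_contain("http://example.com/index.html"):
    pass  # send an inter-cache query to that peer
```

Because the summaries are refreshed only periodically they can be slightly stale, which is why the hit ratios are described as "almost the same" as ICP's rather than identical.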

2,077 citations

Journal Article

[...]

TL;DR: Experimental results show that the proposed diamond search (DS) algorithm is better than the four-step search (4SS) and block-based gradient descent search (BBGDS), in terms of mean-square error performance and required number of search points.
Abstract: Based on the study of motion vector distribution from several commonly used test image sequences, a new diamond search (DS) algorithm for fast block-matching motion estimation (BMME) is proposed in this paper. Simulation results demonstrate that the proposed DS algorithm greatly outperforms the well-known three-step search (TSS) algorithm. Compared with the new three-step search (NTSS) algorithm, the DS algorithm achieves close performance but requires less computation by up to 22% on average. Experimental results also show that the DS algorithm is better than the four-step search (4SS) and block-based gradient descent search (BBGDS), in terms of mean-square error performance and required number of search points.
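For context on how the search proceeds: DS repeatedly evaluates a nine-point large diamond pattern centred on the current best candidate and recentres on the minimum-cost point; once the minimum stays at the centre it performs one five-point small diamond refinement. The Python sketch below illustrates that loop for a single block using a sum-of-absolute-differences cost; the function names, block size, and test frames are assumptions for illustration, not the authors' implementation.

```python
# Simplified sketch of diamond-search block matching (illustrative only).
# Estimates a motion vector for one block by minimizing the sum of absolute
# differences (SAD) between a block of `cur` and displaced blocks of `ref`.
import numpy as np

LDSP = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]
SDSP = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]

def sad(cur, ref, y, x, dy, dx, bs):
    ry, rx = y + dy, x + dx
    if ry < 0 or rx < 0 or ry + bs > ref.shape[0] or rx + bs > ref.shape[1]:
        return float("inf")  # candidate block falls outside the reference frame
    cur_blk = cur[y:y + bs, x:x + bs].astype(np.int64)
    ref_blk = ref[ry:ry + bs, rx:rx + bs].astype(np.int64)
    return int(np.abs(cur_blk - ref_blk).sum())

def diamond_search(cur, ref, y, x, bs=16):
    """Return (dy, dx) minimizing SAD for the bs x bs block at (y, x)."""
    dy = dx = 0
    while True:
        # Large diamond step: evaluate nine candidates around the current centre.
        costs = [(sad(cur, ref, y, x, dy + oy, dx + ox, bs), oy, ox) for oy, ox in LDSP]
        best = min(costs, key=lambda c: c[0])  # ties favour the centre (listed first)
        if (best[1], best[2]) == (0, 0):
            break  # minimum stayed at the centre: move to the small diamond
        dy, dx = dy + best[1], dx + best[2]
    # Small diamond step: one final refinement around the converged centre.
    costs = [(sad(cur, ref, y, x, dy + oy, dx + ox, bs), oy, ox) for oy, ox in SDSP]
    best = min(costs, key=lambda c: c[0])
    return dy + best[1], dx + best[2]

# Example: a frame shifted by (1, 1) should yield a motion vector of (-1, -1).
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(1, 1), axis=(0, 1))
print(diamond_search(cur, ref, y=16, x=16))
```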

1,924 citations

Book

[...]

08 Sep 2009
Abstract: 21st Century Skills (ISBN 978-0-470-47538-6) was published in 2009 by Jossey-Bass, San Francisco, California, United States. The book has a total of xxxi + 206 pages. The authors of the book are Bernie Trilling and Charles Fadel. Bernie Trilling is founder and CEO of 21st Century Learning Advisors and the former global director of the Oracle Education Foundation. He has worked on various pioneering educational products and services and is an active member of a number of organizations dedicated to bringing 21st century learning methods to students and teachers around the world. Charles Fadel is founder and chairman of the Center for Curriculum Redesign and the Fondation Helvetica Education, and the former Global Education Lead at Cisco Systems. He has engaged with a wide variety of education ministries and boards and has worked on education projects in more than thirty countries.

1,722 citations

Proceedings Article

[...]

22 Jun 2002
TL;DR: This paper proposes a query algorithm based on multiple random walks that resolves queries almost as quickly as Gnutella's flooding method while reducing the network traffic by two orders of magnitude in many cases.
Abstract: Decentralized and unstructured peer-to-peer networks such as Gnutella are attractive for certain applications because they require no centralized directories and no precise control over network topology or data placement. However, the flooding-based query algorithm used in Gnutella does not scale; each query generates a large amount of traffic and large systems quickly become overwhelmed by the query-induced load. This paper explores, through simulation, various alternatives to Gnutella's query algorithm, data replication strategy, and network topology. We propose a query algorithm based on multiple random walks that resolves queries almost as quickly as Gnutella's flooding method while reducing the network traffic by two orders of magnitude in many cases. We also present simulation results on a distributed replication strategy proposed in [8]. Finally, we find that among the various network topologies we consider, uniform random graphs yield the best performance.
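The efficiency gain over flooding comes from each query copy being forwarded to a single randomly chosen neighbour per hop, so traffic grows with the number of walkers and the walk length instead of fanning out with the flood's TTL. The Python sketch below illustrates a k-walker random-walk lookup over a toy adjacency-list overlay; the data structures, parameters, and termination rule are our assumptions, not the paper's simulator.

```python
# Minimal sketch of a k-walker random-walk query over an unstructured overlay
# (illustrative only; graph representation, item placement, and parameters are assumed).
import random

def random_walk_search(adj, holders, start, key, walkers=16, max_steps=1024):
    """adj: node -> list of neighbours; holders: node -> set of keys stored there.
    Returns (node_holding_key, messages_sent), or (None, messages_sent) on failure."""
    messages = 0
    walks = [start] * walkers
    for _ in range(max_steps):
        next_walks = []
        for node in walks:
            if key in holders.get(node, ()):
                return node, messages          # query resolved at this node
            if not adj.get(node):
                continue                       # dead end: this walker stops
            nxt = random.choice(adj[node])     # forward to exactly one random neighbour
            messages += 1
            next_walks.append(nxt)
        if not next_walks:
            break
        walks = next_walks
    return None, messages

# Usage on a toy overlay: node 0 looks up "obj", which only node 3 stores.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
holders = {3: {"obj"}}
print(random_walk_search(adj, holders, start=0, key="obj"))
```

A real deployment would also bound each walker with a TTL and stop the remaining walkers once one succeeds; the sketch omits those details.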

1,707 citations


Authors

Showing 15 of 13783 authors, ranked by h-index

Name | H-index | Papers | Citations
Robert W. Heath | 128 | 1049 | 73171
David E. Culler | 116 | 429 | 76131
Sandeep Kumar | 94 | 1563 | 38652
Nick McKeown | 90 | 226 | 44403
David J. DeWitt | 87 | 224 | 31134
J.J. Garcia-Luna-Aceves | 86 | 602 | 25151
Richard J. Havel | 86 | 220 | 57955
George Varghese | 84 | 253 | 28598
John P. Kane | 76 | 302 | 20148
Bhaskar Krishnamachari | 74 | 464 | 24003
Christopher P. Ames | 73 | 713 | 19319
Henry Fuchs | 69 | 323 | 17663
Sunil Gupta | 69 | 440 | 33856
Zhijun Li | 68 | 614 | 14518
David W. Chang | 67 | 353 | 16969
Network Information
Related Institutions (5)
Intel: 68.8K papers, 1.6M citations (91% related)
Hewlett-Packard: 59.8K papers, 1.4M citations (89% related)
IBM: 253.9K papers, 7.4M citations (88% related)
Alcatel-Lucent: 53.3K papers, 1.4M citations (88% related)
Samsung: 163.6K papers, 2M citations (87% related)

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2021 | 350
2020 | 732
2019 | 709
2018 | 685
2017 | 613
2016 | 777