Institution

Stevens Institute of Technology

Education
Hoboken, New Jersey, United States
About: Stevens Institute of Technology is an education organization based in Hoboken, New Jersey, United States. It is known for research contributions in the topics Cognitive radio and Wireless network. The organization has 5440 authors who have published 12684 publications receiving 296875 citations. The organization is also known as Stevens and Stevens Tech.


Papers
Proceedings ArticleDOI
01 Jan 2002
TL;DR: Denotational semantics is given in this article for a Java-like language with pointers, subclassing and dynamic dispatch, class-oriented visibility control, recursive types and methods, and privilege-based access control.
Abstract: Denotational semantics is given for a Java-like language with pointers, subclassing and dynamic dispatch, class oriented visibility control, recursive types and methods, and privilege-based access control. Representation independence (relational parametricity) is proved, using a semantic notion of confinement similar to ones for which static disciplines have been recently proposed.

97 citations

Journal ArticleDOI
TL;DR: In this article, a general ARFIMA model capable of reproducing long- and short-memory properties is fitted directly to the data, and the conclusion is then based on the estimated parameters of the model.
Abstract: The present paper studies international stock indexes of the G-7 countries in the last 40 years. Evidence about the statistical memory of the returns is presented, and only in one country could the existence of long memory be sustained. These results contradict various previous studies that were based on the R/S analysis and consistently claimed the existence of long memory in financial returns. A general ARFIMA model capable of reproducing long- and short-memory properties is directly fitted to the data. The conclusion is then based on the estimated parameters of the model.
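The long-memory component of an ARFIMA(p, d, q) model enters through the fractional differencing operator (1 - L)^d. As a minimal illustrative sketch (not the paper's code), the weights of its binomial expansion can be generated by a simple recurrence; for 0 < d < 0.5 they decay hyperbolically, which is what produces long memory:

```python
def frac_diff_weights(d, n):
    """First n weights of the (1 - L)^d expansion:
    w_0 = 1, and w_k = w_{k-1} * (k - 1 - d) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

# d = 0 reduces to no differencing at all: [1, 0, 0, ...]
# w_1 is always -d, so e.g. d = 0.4 gives w_1 = -0.4.
print(frac_diff_weights(0.4, 4))
```

With d = 0 the series collapses to the identity, and with d = 1 it collapses to ordinary first differencing, which is why estimating d from the data lets the model discriminate between short- and long-memory behavior.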

97 citations

Journal ArticleDOI
TL;DR: In this paper, the role of various physical processes in controlling total water level was examined for the August 2011 tropical cyclone Irene and a March 2010 nor'easter that affected the New York City (NYC) metropolitan area.
Abstract: Detailed simulations, comparisons with observations, and model sensitivity experiments are presented for the August 2011 tropical cyclone Irene and a March 2010 nor'easter that affected the New York City (NYC) metropolitan area. These storms brought strong winds, heavy rainfall, and the fourth and seventh highest gauged storm tides (total water level), respectively, at the Battery, NYC. To dissect the storm tides and examine the role of various physical processes in controlling total water level, a series of model experiments was performed where one process was omitted for each experiment, and results were studied for eight different tide stations. Neglecting remote meteorological forcing (beyond ∼250 km) led to typical reductions of 7–17% in peak storm tide, neglecting water density variations led to typical reductions of 1–13%, neglecting a parameterization that accounts for enhanced wind drag due to wave steepness led to typical reductions of 3–12%, and neglecting atmospheric pressure gradient forcing led to typical reductions of 3–11%. Neglecting freshwater inputs to the model domain led to reductions of 2% at the Battery and 9% at Piermont, 14 km up the Hudson River from NYC. Few storm surge modeling studies or operational forecasting systems incorporate the “estuary effects” of freshwater flows and water density variations, yet joint omission of these processes for Irene leads to a low-bias in storm tide for NYC sites like La Guardia and Newark Airports (9%) and the Battery (7%), as well as nearby vulnerable sites like the Indian Point nuclear plant (23%).

97 citations

Book ChapterDOI
09 Jul 2011
TL;DR: By mapping messages into a large context, the authors compute distances between them and then classify them; this yields more accurate classification of a set of Twitter messages than alternative techniques based on string edit distance or latent semantic analysis.
Abstract: By mapping messages into a large context, we can compute the distances between them, and then classify them. We test this conjecture on Twitter messages: Messages are mapped onto their most similar Wikipedia pages, and the distances between pages are used as a proxy for the distances between messages. This technique yields more accurate classification of a set of Twitter messages than alternative techniques using string edit distance and latent semantic analysis.
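The idea of classifying short messages by their nearest reference document can be sketched in a few lines. This is a hedged toy illustration, not the authors' method: the reference "pages" and their labels are hypothetical stand-ins for Wikipedia pages, and Jaccard token overlap stands in for whatever similarity measure the paper uses:

```python
def jaccard(a, b):
    """Token-overlap similarity between two strings (0.0 to 1.0)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Hypothetical reference documents, each with a known topic label.
pages = {
    "Basketball is a team sport played with a ball and hoop": "sports",
    "A stock market lists shares of publicly traded companies": "finance",
}

def classify(message):
    """Label a short message with the class of its most similar page."""
    best_page = max(pages, key=lambda p: jaccard(message, p))
    return pages[best_page]

print(classify("the stock price of the company fell"))  # finance
```

The appeal of this scheme for tweets is that the reference corpus supplies context the short message itself lacks, so two messages with no words in common can still land near each other if they map to related pages.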

97 citations


Authors

Showing all 5536 results

Name                                   H-index   Papers   Citations
Paul M. Thompson                       183       2271     146736
Roger Jones                            138       998      114061
Georgios B. Giannakis                  137       1321     73517
Li-Jun Wan                             113       639      52128
Joel L. Lebowitz                       101       754      39713
David Smith                            100       994      42271
Derong Liu                             77        608      19399
Robert R. Clancy                       77        293      18882
Karl H. Schoenbach                     75        494      19923
Robert M. Gray                         75        371      39221
Jin Yu                                 74        480      32123
Sheng Chen                             71        688      27847
Hui Wu                                 71        347      19666
Amir H. Gandomi                        67        375      22192
Haibo He                               66        482      22370
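The H-index reported for each author above has a simple definition: an author has index h if h of their papers each have at least h citations. A minimal sketch of the computation:

```python
def h_index(citations):
    """H-index: the largest h such that h papers have >= h citations each."""
    citations = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(citations, start=1):
        if c >= rank:
            h = rank       # this paper still clears the bar at its rank
        else:
            break          # citations are sorted, so no later paper can
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that the index is capped by both paper count and citation counts, which is why authors in the table with similar citation totals can have quite different H-indexes.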
Network Information
Related Institutions (5)

Institution                             Papers    Citations   Relatedness
Georgia Institute of Technology         119K      4.6M        94%
Nanyang Technological University        112.8K    3.2M        92%
Massachusetts Institute of Technology   268K      18.2M       91%
University of Maryland, College Park    155.9K    7.2M        91%
Purdue University                       163.5K    5.7M        91%

Performance Metrics
No. of papers from the Institution in previous years

Year    Papers
2023    42
2022    139
2021    765
2020    820
2019    799
2018    563