Institution
Stevens Institute of Technology
About: Stevens Institute of Technology is an educational institution based in Hoboken, New Jersey, United States. It is known for its research contributions in topics such as cognitive radio and wireless networks. The organization has 5,440 authors who have published 12,684 publications receiving 296,875 citations. The organization is also known as Stevens and Stevens Tech.
Papers published on a yearly basis
Papers
01 Jan 2002
TL;DR: Denotational semantics for a Java-like language with pointers, subclassing and dynamic dispatch, class-oriented visibility control, recursive types and methods, and privilege-based access control are given in this article.
Abstract: Denotational semantics is given for a Java-like language with pointers, subclassing and dynamic dispatch, class oriented visibility control, recursive types and methods, and privilege-based access control. Representation independence (relational parametricity) is proved, using a semantic notion of confinement similar to ones for which static disciplines have been recently proposed.
97 citations
TL;DR: In this article, a general ARFIMA model capable of reproducing long- and short-memory properties is fitted to the data, and the conclusion is then based on the estimated parameters of the model.
Abstract: The present paper studies international stock indexes of the G-7 countries in the last 40 years. Evidence about the statistical memory of the returns is presented, and only in one country could the existence of long memory be sustained. These results contradict various previous studies that were based on the R/S analysis and consistently claimed the existence of long memory in financial returns. A general ARFIMA model capable of reproducing long- and short-memory properties is directly fitted to the data. The conclusion is then based on the estimated parameters of the model.
97 citations
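The long-memory diagnosis in the abstract above hinges on estimating the fractional-differencing parameter d of an ARFIMA model. As an illustration only (not the paper's actual estimation procedure), a minimal log-periodogram (GPH) estimate of d can be sketched; applied to an i.i.d. (short-memory) series, the estimate should sit near zero:

```python
import numpy as np

def gph_estimate(x, m=None):
    """Estimate the long-memory parameter d by log-periodogram (GPH) regression."""
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))  # a common bandwidth choice
    x = x - x.mean()
    fft = np.fft.fft(x)
    j = np.arange(1, m + 1)
    lam = 2 * np.pi * j / n                       # Fourier frequencies lambda_j
    I = (np.abs(fft[1:m + 1]) ** 2) / (2 * np.pi * n)  # periodogram ordinates
    # log f(lam) ~ const - 2*d*log(2*sin(lam/2)), so the slope of this
    # regression estimates d
    reg = -2 * np.log(2 * np.sin(lam / 2))
    slope = np.polyfit(reg, np.log(I), 1)[0]
    return slope

rng = np.random.default_rng(0)
white = rng.standard_normal(4000)  # i.i.d. noise: true d = 0
d_hat = gph_estimate(white)
print(f"estimated d = {d_hat:.3f}")
```

A series with genuine long memory (0 < d < 0.5) would instead produce a clearly positive slope, which is the kind of evidence the paper reports finding in only one of the G-7 indexes.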
TL;DR: In this paper, the role of various physical processes in controlling total water level was examined for the August 2011 tropical cyclone Irene and a March 2010 nor'easter that affected the New York City (NYC) metropolitan area.
Abstract: Detailed simulations, comparisons with observations, and model sensitivity experiments are presented for the August 2011 tropical cyclone Irene and a March 2010 nor'easter that affected the New York City (NYC) metropolitan area. These storms brought strong winds, heavy rainfall, and the fourth and seventh highest gauged storm tides (total water level), respectively, at the Battery, NYC. To dissect the storm tides and examine the role of various physical processes in controlling total water level, a series of model experiments was performed in which one process was omitted for each experiment, and results were studied for eight different tide stations. Neglecting remote meteorological forcing (beyond ∼250 km) led to typical reductions of 7–17% in peak storm tide, neglecting water density variations led to typical reductions of 1–13%, neglecting a parameterization that accounts for enhanced wind drag due to wave steepness led to typical reductions of 3–12%, and neglecting atmospheric pressure gradient forcing led to typical reductions of 3–11%. Neglecting freshwater inputs to the model domain led to reductions of 2% at the Battery and 9% at Piermont, 14 km up the Hudson River from NYC. Few storm surge modeling studies or operational forecasting systems incorporate the “estuary effects” of freshwater flows and water density variations, yet joint omission of these processes for Irene leads to a low bias in storm tide for NYC sites like La Guardia and Newark Airports (9%) and the Battery (7%), as well as nearby vulnerable sites like the Indian Point nuclear plant (23%).
97 citations
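The sensitivity experiments above follow a simple ablation pattern: rerun the model with one process switched off and report the percentage drop in peak storm tide at each station. A minimal sketch of that bookkeeping, using made-up peak values (the real peaks come from the hydrodynamic model, not reproduced here):

```python
# Hypothetical peak storm tides (metres) for a baseline run and a
# sensitivity run with remote meteorological forcing omitted.
baseline = {"Battery": 2.9, "Piermont": 2.6, "Kings Point": 3.1}
no_remote = {"Battery": 2.6, "Piermont": 2.3, "Kings Point": 2.7}

for station, base in baseline.items():
    reduction = 100.0 * (base - no_remote[station]) / base
    print(f"{station}: {reduction:.1f}% reduction in peak storm tide")
```

Repeating this over each omitted process (density variations, wave-steepness drag, pressure gradients, freshwater inputs) yields the per-process percentage ranges quoted in the abstract.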
09 Jul 2011
TL;DR: By mapping messages into a large context, the authors can compute the distances between them and then classify them, which yields more accurate classification of a set of Twitter messages than alternative techniques using string edit distance and latent semantic analysis.
Abstract: By mapping messages into a large context, we can compute the distances between them, and then classify them. We test this conjecture on Twitter messages: Messages are mapped onto their most similar Wikipedia pages, and the distances between pages are used as a proxy for the distances between messages. This technique yields more accurate classification of a set of Twitter messages than alternative techniques using string edit distance and latent semantic analysis.
97 citations
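Once message-to-message distances are proxied by distances between the Wikipedia pages the messages map to, classification can be as simple as nearest-neighbour assignment. A minimal sketch with a hypothetical, made-up distance matrix (the paper's actual page-matching and distance computation are not reproduced here):

```python
import numpy as np

# Hypothetical distance matrix: rows are unlabeled tweets, columns are
# labeled tweets; each entry is the distance between the Wikipedia pages
# the two messages were mapped to (smaller = more similar).
page_dist = np.array([
    [0.2, 0.9, 0.8],   # tweet A is closest to labeled tweet 0
    [0.7, 0.1, 0.6],   # tweet B is closest to labeled tweet 1
])
labels = np.array(["sports", "politics", "sports"])  # labels of the columns

# 1-nearest-neighbour: each tweet takes the label of its closest labeled tweet
pred = labels[page_dist.argmin(axis=1)]
print(pred)  # ['sports' 'politics']
```

The classifier itself is generic; the paper's contribution is the distance proxy, which replaces direct string comparison with distances in Wikipedia's page space.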
Authors
Showing all 5536 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Paul M. Thompson | 183 | 2271 | 146736 |
| Roger Jones | 138 | 998 | 114061 |
| Georgios B. Giannakis | 137 | 1321 | 73517 |
| Li-Jun Wan | 113 | 639 | 52128 |
| Joel L. Lebowitz | 101 | 754 | 39713 |
| David Smith | 100 | 994 | 42271 |
| Derong Liu | 77 | 608 | 19399 |
| Robert R. Clancy | 77 | 293 | 18882 |
| Karl H. Schoenbach | 75 | 494 | 19923 |
| Robert M. Gray | 75 | 371 | 39221 |
| Jin Yu | 74 | 480 | 32123 |
| Sheng Chen | 71 | 688 | 27847 |
| Hui Wu | 71 | 347 | 19666 |
| Amir H. Gandomi | 67 | 375 | 22192 |
| Haibo He | 66 | 482 | 22370 |