Institution
Stevens Institute of Technology
Education • Hoboken, New Jersey, United States
About: Stevens Institute of Technology is an education organization based in Hoboken, New Jersey, United States. It is known for its research contributions in the topics: Computer science & Cognitive radio. The organization has 5440 authors who have published 12684 publications receiving 296875 citations. The organization is also known as: Stevens & Stevens Tech.
Topics: Computer science, Cognitive radio, Communication channel, Wireless network, Artificial neural network
Papers published on a yearly basis
Papers
22 Aug 2004
TL;DR: An efficient and privacy-preserving version of the K2 algorithm is given to construct the structure of a Bayesian network for the parties' joint data on the combination of their databases without revealing anything about their data to each other.
Abstract: As more and more activities are carried out using computers and computer networks, the amount of potentially sensitive data stored by businesses, governments, and other parties increases. Different parties may wish to benefit from cooperative use of their data, but privacy regulations and other privacy concerns may prevent the parties from sharing their data. Privacy-preserving data mining provides a solution by creating distributed data mining algorithms in which the underlying data is not revealed. In this paper, we present a privacy-preserving protocol for a particular data mining task: learning the Bayesian network structure for distributed heterogeneous data. In this setting, two parties owning confidential databases wish to learn the structure of a Bayesian network on the combination of their databases without revealing anything about their data to each other. We give an efficient and privacy-preserving version of the K2 algorithm to construct the structure of a Bayesian network for the parties' joint data.
234 citations
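The private protocol aside, the underlying K2 search is easy to sketch: it scores candidate parent sets with the Cooper–Herskovits metric and greedily adds parents from the nodes earlier in a fixed ordering. The following is a minimal, plain (non-privacy-preserving) Python sketch; the function names and the `max_parents` cap are illustrative, not from the paper:

```python
import math
from itertools import product

def k2_score(data, child, parents, arity):
    """Log Cooper-Herskovits K2 score of one node given a parent set.
    data: list of tuples of discrete values, indexed by variable number."""
    r = arity[child]
    for cfg in [()] if not parents else None or list(product(*[range(arity[p]) for p in parents])):
        pass  # (enumeration happens in the loop below)
    score = 0.0
    for cfg in list(product(*[range(arity[p]) for p in parents])) or [()]:
        rows = [row for row in data
                if all(row[p] == v for p, v in zip(parents, cfg))]
        n_ij = len(rows)
        # log[ (r-1)! / (N_ij + r - 1)! * prod_k N_ijk! ]
        score += math.lgamma(r) - math.lgamma(n_ij + r)
        for k in range(r):
            n_ijk = sum(1 for row in rows if row[child] == k)
            score += math.lgamma(n_ijk + 1)
    return score

def k2(data, order, arity, max_parents=2):
    """Greedy K2: for each node (in the given ordering), repeatedly add the
    single earlier node that most improves the score, until no candidate
    improves it or the parent cap is reached."""
    parents = {v: [] for v in order}
    for i, v in enumerate(order):
        best = k2_score(data, v, parents[v], arity)
        candidates = set(order[:i])
        improved = True
        while improved and len(parents[v]) < max_parents:
            improved = False
            for c in candidates - set(parents[v]):
                s = k2_score(data, v, parents[v] + [c], arity)
                if s > best:
                    best, best_c, improved = s, c, True
            if improved:
                parents[v].append(best_c)
    return parents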
TL;DR: In this article, a step-by-step guide is presented that leads managers through the four components of the technological base: 1. technological assets, 2. organizational assets, 3. external assets, and 4. project management.
Abstract: A common mistake in assessing an organization's technological base is narrowing the review to matters of technical competence. Managers need a framework for assessing the much broader question of how well their organizations are positioned to derive competitive advantage from technology. A step-by-step guide is presented that leads managers through the 4 components of the technological base: 1. technological assets, 2. organizational assets, 3. external assets, and 4. project management. Case studies of organizations in the defense industry illustrate how 2 companies' strategies for moving into a new business were shaped by the strengths and weaknesses of their respective technological bases. Of the 4 dimensions, it is usually the organizational assets that prove to be the limiting element. A hierarchy was found among the organizational assets: 1. skills, 2. procedures, 3. structure, 4. strategy, and 5. culture.
233 citations
23 Jun 2013
TL;DR: This work proposes a joint Bayesian adaptation algorithm to adapt the universally trained GMM to better model the pose variations between the target pair of faces/face tracks, which consistently improves face verification accuracy.
Abstract: Pose variation remains a major challenge for real-world face recognition. We approach this problem through a probabilistic elastic matching method. We take a part-based representation by extracting local features (e.g., LBP or SIFT) from densely sampled multi-scale image patches. By augmenting each feature with its location, a Gaussian mixture model (GMM) is trained to capture the spatial-appearance distribution of all face images in the training corpus. Each mixture component of the GMM is constrained to be a spherical Gaussian to balance the influence of the appearance and location terms. Each Gaussian component builds a correspondence between a pair of features to be matched across two faces/face tracks. For face verification, we train an SVM on the vector concatenating the difference vectors of all the feature pairs to decide whether a pair of faces/face tracks is matched. We further propose a joint Bayesian adaptation algorithm to adapt the universally trained GMM to better model the pose variations between the target pair of faces/face tracks, which consistently improves face verification accuracy. Our experiments show that our method outperforms the state of the art under the most restricted protocol on Labeled Faces in the Wild (LFW) and the YouTube video face database by a significant margin.
232 citations
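The matching step can be illustrated with a toy sketch: each local descriptor is augmented with its patch location, and for each spherical, equal-weight Gaussian component the highest-responsibility feature in each face is selected (with a shared variance this is simply the feature nearest the component mean); the per-component difference vectors are then concatenated into the SVM input. A simplified pure-Python sketch, with illustrative function names (the real pipeline fits the GMM by EM on dense multi-scale patches):

```python
def augment(descriptor, location, w=1.0):
    """Append the (weighted) patch location to a local appearance descriptor."""
    return list(descriptor) + [w * c for c in location]

def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def match_features(feats_a, feats_b, means):
    """For each spherical, equal-weight Gaussian component, the feature with
    the highest responsibility is the one nearest the component mean; each
    matched pair contributes one difference vector, and all components'
    difference vectors are concatenated (the input to the verification SVM)."""
    diffs = []
    for mu in means:
        fa = min(feats_a, key=lambda f: sqdist(f, mu))
        fb = min(feats_b, key=lambda f: sqdist(f, mu))
        diffs.extend(x - y for x, y in zip(fa, fb))
    return diffs
```

Because the components are spatial-appearance Gaussians, nearby patches with similar appearance in the two faces are matched even when their exact positions differ, which is what makes the matching "elastic".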
TL;DR: In this paper, the effect of breaking waves on ocean surface temperatures and surface boundary layer deepening is investigated, and the modification of the Mellor-Yamada turbulence closure model by Craig and Banner and others to include surface wave breaking energetics reduces summertime surface temperatures when the surface layer is relatively shallow.
Abstract: The effect of breaking waves on ocean surface temperatures and surface boundary layer deepening is investigated. The modification of the Mellor‐Yamada turbulence closure model by Craig and Banner and others to include surface wave breaking energetics reduces summertime surface temperatures when the surface layer is relatively shallow. The effect of the Charnock constant in the relevant drag coefficient relation is also studied.
229 citations
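The two ingredients the abstract names can be written down compactly: the Craig–Banner modification injects turbulent kinetic energy at the surface at a rate proportional to the cube of the water-side friction velocity (F = α u*³, with α on the order of 100), while the Charnock relation ties the roughness length to the air-side friction velocity, z0 = β u*²/g. A hedged numeric sketch; the drag coefficient, α, and Charnock constant β below are common illustrative values, not the paper's:

```python
G = 9.81  # gravitational acceleration, m s^-2

def friction_velocities(u10, cd=1.3e-3, rho_air=1.2, rho_water=1025.0):
    """Air-side and water-side friction velocities from a bulk drag law:
    tau = rho_air * cd * u10**2, u*_a = sqrt(tau/rho_air), u*_w = sqrt(tau/rho_water)."""
    tau = rho_air * cd * u10 ** 2
    return (tau / rho_air) ** 0.5, (tau / rho_water) ** 0.5

def charnock_roughness(u_star_air, beta=0.0185):
    """Charnock relation for the surface roughness length: z0 = beta * u*^2 / g."""
    return beta * u_star_air ** 2 / G

def craig_banner_tke_flux(u_star_water, alpha=100.0):
    """Craig-Banner surface TKE input from breaking waves: F = alpha * u*_w^3."""
    return alpha * u_star_water ** 3
```

Because the TKE flux scales with u*³, wave breaking matters most at high winds, and because it is deposited at the surface it preferentially deepens a shallow summertime mixed layer, cooling the surface as the abstract describes.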
15 Oct 2018
TL;DR: LEMNA is proposed: a high-fidelity explanation method dedicated to security applications that approximates a local area of the complex deep learning decision boundary using a simple interpretable model and achieves much higher fidelity than existing methods.
Abstract: While deep learning has shown great potential in various domains, its lack of transparency has limited its application in security- or safety-critical areas. Existing research has attempted to develop explanation techniques that provide interpretable explanations for each classification decision. Unfortunately, current methods are optimized for non-security tasks (e.g., image analysis). Their key assumptions are often violated in security applications, leading to poor explanation fidelity. In this paper, we propose LEMNA, a high-fidelity explanation method dedicated to security applications. Given an input data sample, LEMNA generates a small set of interpretable features to explain how the input sample is classified. The core idea is to approximate a local area of the complex deep learning decision boundary using a simple interpretable model. The local interpretable model is specially designed to (1) handle feature dependency so as to work well with security applications (e.g., binary code analysis); and (2) handle nonlinear local boundaries to boost explanation fidelity. We evaluate our system using two popular deep learning applications in security (a malware classifier and a function-start detector for binary reverse engineering). Extensive evaluations show that LEMNA's explanations have a much higher fidelity level than existing methods. In addition, we demonstrate practical use cases of LEMNA that help machine learning developers validate model behavior, troubleshoot classification errors, and automatically patch errors in the target models.
229 citations
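LEMNA's surrogate is a fused-lasso-regularized mixture regression fitted around the input; as a much simpler stand-in for the same idea (attributing a classification locally to individual features), one can score each feature by occluding it and measuring the change in the model's output. A deliberately simplified sketch using occlusion scoring, not LEMNA's actual estimator:

```python
def occlusion_importance(model, x, baseline=0):
    """Attribute the model's output on x to individual features by replacing
    each feature with a baseline value and recording the output drop.
    (A crude local explanation; LEMNA instead fits a fused-lasso mixture
    regression, which also captures dependencies between adjacent features
    such as consecutive bytes in binary code.)"""
    base = model(x)
    scores = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline
        scores.append(base - model(occluded))
    return scores
```

For a linear model such as f(x) = 3·x0 + x1 with baseline 0, this recovers the coefficients exactly; for a deep model it only probes the local boundary one feature at a time, which is precisely the limitation LEMNA's joint surrogate is designed to overcome.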
Authors
Showing all 5536 results
| Name | H-index | Papers | Citations |
| --- | --- | --- | --- |
| Paul M. Thompson | 183 | 2271 | 146736 |
| Roger Jones | 138 | 998 | 114061 |
| Georgios B. Giannakis | 137 | 1321 | 73517 |
| Li-Jun Wan | 113 | 639 | 52128 |
| Joel L. Lebowitz | 101 | 754 | 39713 |
| David Smith | 100 | 994 | 42271 |
| Derong Liu | 77 | 608 | 19399 |
| Robert R. Clancy | 77 | 293 | 18882 |
| Karl H. Schoenbach | 75 | 494 | 19923 |
| Robert M. Gray | 75 | 371 | 39221 |
| Jin Yu | 74 | 480 | 32123 |
| Sheng Chen | 71 | 688 | 27847 |
| Hui Wu | 71 | 347 | 19666 |
| Amir H. Gandomi | 67 | 375 | 22192 |
| Haibo He | 66 | 482 | 22370 |