Institution
University of Massachusetts Lowell
Education • Lowell, Massachusetts, United States
About: University of Massachusetts Lowell is an education organization based in Lowell, Massachusetts, United States. It is known for its research contributions in the topics of Population and Poison control. The organization has 5533 authors who have published 12640 publications, receiving 306181 citations. The organization is also known as UMass Lowell and UML.
Papers published on a yearly basis
Papers
07 Jun 2015
TL;DR: A novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and shows such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.
Abstract: Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or “temporally deep”, are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they can be compositional in spatial and temporal “layers”. Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they can directly map variable-length inputs (e.g., video frames) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.
4,206 citations
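The architecture described in the abstract above, per-frame convolutional features fed through a recurrent network that maps a variable-length frame sequence to an output, can be illustrated with a minimal numpy sketch. The linear "feature extractor" and all dimensions below are toy stand-ins for the paper's deep convnet and LSTM, not the actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights: a linear "per-frame CNN" stand-in plus a simple RNN.
W_feat = rng.standard_normal((16, 8)) * 0.1   # pixels  -> feature
W_xh = rng.standard_normal((8, 4)) * 0.1      # feature -> hidden
W_hh = rng.standard_normal((4, 4)) * 0.1      # hidden  -> hidden
W_hy = rng.standard_normal((4, 3)) * 0.1      # hidden  -> class scores

def lrcn_forward(frames):
    """Map a variable-length sequence of frames to one score vector."""
    h = np.zeros(4)
    for frame in frames:                  # temporal recurrence over frames
        feat = np.tanh(frame @ W_feat)    # "convolutional" perception step
        h = np.tanh(feat @ W_xh + h @ W_hh)
    return h @ W_hy                       # scores from the final state

video = rng.standard_normal((5, 16))      # 5 frames, 16 "pixels" each
scores = lrcn_forward(video)
print(scores.shape)  # (3,)
```

Because the recurrence consumes one frame at a time, the same weights handle clips of any length, which is the "variable-length inputs" property the abstract highlights.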
University of Vermont, Delft University of Technology, Northwestern University, Vrije Universiteit Brussel, University of British Columbia, Leiden University Medical Center, University of Oregon, University of Maryland, Baltimore, State University of New York Upstate Medical University, University of Massachusetts Lowell
TL;DR: A definition of a joint coordinate system (JCS) for the shoulder, elbow, wrist, and hand is proposed and a standard for the local axis system in each articulating segment or bone is generated.
Abstract: In this communication, the Standardization and Terminology Committee (STC) of the International Society of Biomechanics proposes a definition of a joint coordinate system (JCS) for the shoulder, elbow, wrist, and hand. For each joint, a standard for the local axis system in each articulating segment or bone is generated. These axes then standardize the JCS. The STC is publishing these recommendations so as to encourage their use, to stimulate feedback and discussion, and to facilitate further revisions. Adopting these standards will lead to better communication among researchers and clinicians.
3,866 citations
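The core construction in the JCS recommendation, a right-handed orthonormal axis system built from landmark positions on each segment, can be sketched with cross products. The landmark choice below is purely illustrative; the ISB standard prescribes specific bony landmarks and axis conventions for each bone:

```python
import numpy as np

def segment_axes(proximal, distal, lateral):
    """Right-handed orthonormal axis system for a body segment from
    three landmark positions (hypothetical landmarks; the ISB standard
    prescribes specific bony landmarks and axis directions per bone).
    Returns a 3x3 matrix whose columns are the x, y, z unit axes."""
    y = proximal - distal
    y = y / np.linalg.norm(y)            # long axis, pointing proximally
    x = np.cross(y, lateral - distal)    # perpendicular to landmark plane
    x = x / np.linalg.norm(x)
    z = np.cross(x, y)                   # completes the right-handed triad
    return np.column_stack([x, y, z])

# Example: axes for a toy "segment" defined by three points.
R = segment_axes(np.array([0.0, 1.0, 0.0]),   # proximal end
                 np.array([0.0, 0.0, 0.0]),   # distal end
                 np.array([1.0, 0.0, 0.0]))   # lateral landmark
```

The returned matrix is orthonormal with determinant +1, so it can serve as the rotation relating the segment's local frame to the lab frame, which is what makes joint angles between two such frames well defined.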
TL;DR: The A-type granitoids can be divided into two chemical groups: A1, with element ratios similar to those of oceanic-island basalts, and A2, with ratios ranging from those of continental crust to those of island-arc basalts. The two groups have very different sources and tectonic settings.
Abstract: The A-type granitoids can be divided into two chemical groups. The first group (A1) is characterized by element ratios similar to those observed for oceanic-island basalts. The second group (A2) is characterized by ratios that vary from those observed for continental crust to those observed for island-arc basalts. It is proposed that these two types have very different sources and tectonic settings. The A1 group represents differentiates of magmas derived from sources like those of oceanic-island basalts but emplaced in continental rifts or during intraplate magmatism. The A2 group represents magmas derived from continental crust or underplated crust that has been through a cycle of continent-continent collision or island-arc magmatism.
2,043 citations
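Since the A1/A2 subdivision rests on trace-element ratios, a classifier for it is a one-line threshold test. The Y/Nb ratio and the 1.2 cutoff below are the commonly quoted discriminant attributed to this line of work, but treat both as an illustrative assumption rather than the paper's exact criterion:

```python
def classify_a_type(y_ppm, nb_ppm, threshold=1.2):
    """Assign an A-type granitoid sample to group A1 or A2 by its
    Y/Nb ratio. The 1.2 cutoff is the commonly quoted discriminant;
    it is used here for illustration, not as a definitive rule."""
    ratio = y_ppm / nb_ppm
    if ratio < threshold:
        return "A1"  # OIB-like source: rift or intraplate setting
    return "A2"      # crustal or arc-derived source

print(classify_a_type(10.0, 20.0))  # Y/Nb = 0.5 -> A1
print(classify_a_type(30.0, 10.0))  # Y/Nb = 3.0 -> A2
```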
Proceedings Article
10 Dec 2014
TL;DR: This work proposes a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss to learn a representation that is both semantically meaningful and domain invariant, and shows that a domain confusion metric can be used for model selection to determine the dimension of the adaptation layer and its best position in the CNN architecture.
Abstract: Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task.
2,036 citations
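The domain confusion idea in the abstract above can be sketched as a loss that is minimized when a domain classifier's output matches the uniform distribution, i.e. when features no longer reveal which domain an example came from. This is a simplified numpy sketch of that one ingredient, not the paper's full training procedure:

```python
import numpy as np

def domain_confusion_loss(domain_probs):
    """Cross-entropy between the domain classifier's predicted
    distribution and the uniform distribution over K domains.
    It reaches its minimum, log K, exactly when the classifier
    cannot tell the domains apart, which is what the feature
    extractor is trained to achieve."""
    k = domain_probs.shape[-1]
    return -np.mean(np.sum((1.0 / k) * np.log(domain_probs + 1e-12), axis=-1))

# A maximally confused classifier (uniform output) scores lower than
# a confident one, so minimizing this loss over the feature extractor
# pushes the learned representation toward domain invariance.
print(domain_confusion_loss(np.array([[0.5, 0.5]])))    # ~log 2 ~= 0.693
print(domain_confusion_loss(np.array([[0.99, 0.01]])))  # larger
```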
Authors
Showing all 5622 results
Name | H-index | Papers | Citations |
---|---|---|---|
David L. Kaplan | 177 | 1944 | 146082 |
Yang Yang | 171 | 2644 | 153049 |
Krzysztof Matyjaszewski | 169 | 1431 | 128585 |
Yi Yang | 143 | 2456 | 92268 |
Ernst J. Schaefer | 131 | 605 | 89168 |
Jose M. Ordovas | 123 | 1024 | 70978 |
Michael R. Hamblin | 117 | 899 | 59533 |
Mike Clarke | 113 | 1037 | 164328 |
Katherine L. Tucker | 106 | 683 | 39404 |
Charles T. Driscoll | 97 | 554 | 37355 |
Louise Ryan | 88 | 492 | 26849 |
Zhongping Chen | 81 | 742 | 24249 |
Kate Saenko | 80 | 287 | 39066 |
Richard A. Gross | 79 | 402 | 22225 |
Dong-Yu Kim | 70 | 342 | 20340 |