Author

Lisa M. Brown

Bio: Lisa M. Brown is an academic researcher from University of Washington. The author has contributed to research in the topics of Video tracking & Object (computer science). The author has an h-index of 45 and has co-authored 144 publications, which have received 8,042 citations. Previous affiliations of Lisa M. Brown include Albert Einstein College of Medicine & IBM.


Papers
Journal ArticleDOI
TL;DR: The current status of vesiculogenesis research in thick-walled microorganisms is described and the cargo and functions associated with EVs in these species are discussed.
Abstract: Extracellular vesicles (EVs) are produced by all domains of life. In Gram-negative bacteria, EVs are produced by the pinching off of the outer membrane; however, how EVs escape the thick cell walls of Gram-positive bacteria, mycobacteria and fungi is still unknown. Nonetheless, EVs have been described in a variety of cell-walled organisms, including Staphylococcus aureus, Mycobacterium tuberculosis and Cryptococcus neoformans. These EVs contain varied cargo, including nucleic acids, toxins, lipoproteins and enzymes, and have important roles in microbial physiology and pathogenesis. In this Review, we describe the current status of vesiculogenesis research in thick-walled microorganisms and discuss the cargo and functions associated with EVs in these species.

834 citations

Journal ArticleDOI
TL;DR: The Moments in Time dataset, a large-scale human-annotated collection of one million short videos corresponding to dynamic events unfolding within three seconds, can serve as a new challenge to develop models that scale to the level of complexity and abstract reasoning that a human processes on a daily basis.
Abstract: We present the Moments in Time dataset, a large-scale human-annotated collection of one million short videos corresponding to dynamic events unfolding within three seconds. Modeling the spatial-audio-temporal dynamics even for actions occurring in 3-second videos poses many challenges: meaningful events do not include only people, but also objects, animals, and natural phenomena; visual and auditory events can be symmetrical in time (“opening” is “closing” in reverse), and either transient or sustained. We describe the annotation process of our dataset (each video is tagged with one action or activity label among 339 different classes), analyze its scale and diversity in comparison to other large-scale video datasets for action recognition, and report results of several baseline models addressing separately, and jointly, three modalities: spatial, temporal and auditory. The Moments in Time dataset, designed to have a large coverage and diversity of events in both visual and auditory modalities, can serve as a new challenge to develop models that scale to the level of complexity and abstract reasoning that a human processes on a daily basis.

416 citations
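
Illustrative sketch (not from the paper): the baselines above address the spatial, temporal and auditory modalities separately and jointly; a minimal way to combine them is late fusion, where each modality produces its own 339-way class scores and the scores are averaged. The PyTorch module below is a hedged reconstruction of that idea only; the feature dimensions and layer names are assumptions, not the authors' architecture.

import torch
import torch.nn as nn

NUM_CLASSES = 339  # one action/activity label per video, per the abstract

class LateFusionBaseline(nn.Module):
    """Hypothetical late-fusion baseline: each modality has its own
    classification head and the class scores are averaged."""
    def __init__(self, spatial_dim=2048, temporal_dim=1024, audio_dim=128):
        super().__init__()
        # Placeholder heads; real baselines would sit on top of CNN/RNN
        # backbones over frames, clips, and audio spectrograms.
        self.spatial_head = nn.Linear(spatial_dim, NUM_CLASSES)
        self.temporal_head = nn.Linear(temporal_dim, NUM_CLASSES)
        self.audio_head = nn.Linear(audio_dim, NUM_CLASSES)

    def forward(self, spatial_feat, temporal_feat, audio_feat):
        # Average the per-modality logits (late fusion).
        return (self.spatial_head(spatial_feat)
                + self.temporal_head(temporal_feat)
                + self.audio_head(audio_feat)) / 3.0

model = LateFusionBaseline()
scores = model(torch.randn(4, 2048), torch.randn(4, 1024), torch.randn(4, 128))
print(scores.shape)  # torch.Size([4, 339])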

Journal ArticleDOI
TL;DR: In this paper, the defect structures, including threading dislocations, partial dislocations bounding stacking faults, and inversion domains, were investigated by transmission electron microscopy for GaN/Al2O3 epilayers grown by metal-organic chemical vapor deposition using a two-step process.
Abstract: Defect structures were investigated by transmission electron microscopy for GaN/Al2O3 (0001) epilayers grown by metal‐organic chemical vapor deposition using a two‐step process. The defect structures, including threading dislocations, partial dislocations bounding stacking faults, and inversion domains, were analyzed by diffraction contrast, high‐resolution imaging, and convergent beam diffraction. GaN film growth was initiated at 600 °C with a nominal 20 nm nucleation layer. This was followed by high‐temperature growth at 1080 °C. The near‐interfacial region of the films consists of a mixture of cubic and hexagonal GaN, which is characterized by a high density of stacking faults bounded by Shockley and Frank partial dislocations. The near‐interfacial region shows a high density of inversion domains. Above ∼0.5 μm thickness, the film consists of isolated threading dislocations of either pure edge, mixed, or pure screw character with a total density of ∼7×10⁸ cm⁻². The threading dislocation reduction in the...

402 citations

Journal ArticleDOI
Andrew W. Senior, Arun Hampapur, Yingli Tian, Lisa M. Brown, Sharath Pankanti, Ruud M. Bolle
TL;DR: This paper presents a tracking system which successfully deals with complex real-world interactions, as demonstrated on the PETS 2001 dataset.

389 citations

Journal ArticleDOI
TL;DR: In this article, a model is presented for the interaction between a matrix slip dislocation and a second-phase particle of lower shear modulus than the matrix; the minimum included angle reached by the arms of the dislocation while cutting the precipitate can be calculated as a function of the energy of the dislocation on either side of the precipitate/matrix interface.

373 citations
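
The critical-angle statement above can be made concrete with a standard line-tension argument. The LaTeX sketch below is a generic reconstruction under assumed notation (E_1 and E_2 for the dislocation energy per unit length inside and outside the particle, T for the line tension, b for the Burgers vector, L for the particle spacing on the slip plane), not the paper's exact derivation.

% Generic line-tension sketch (assumed notation, not the paper's derivation).
% A particle of lower shear modulus lowers the dislocation energy inside it
% (E_1 < E_2), so the arms bend to a minimum included angle \phi_c before cutting.
% One common closure ties \phi_c to the energy ratio across the interface:
\sin\frac{\phi_c}{2} \approx \frac{E_1}{E_2}
% The obstacle strength and critical resolved shear stress then follow from
% the usual force balance with line tension T and obstacle spacing L:
F_{\max} = 2T\cos\frac{\phi_c}{2},
\qquad
\tau_c \approx \frac{F_{\max}}{bL} = \frac{2T}{bL}\sqrt{1-\left(\frac{E_1}{E_2}\right)^{2}}

A larger modulus mismatch gives a smaller E_1/E_2, hence a smaller critical angle and a stronger obstacle.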


Cited by
Book ChapterDOI
TL;DR: In this article, a new representation learning approach for domain adaptation is proposed, in which data at training and test time come from similar but different distributions; the model is encouraged to learn features that are discriminative for the main learning task on the source domain while being unable to discriminate between the training (source) and test (target) domains.
Abstract: We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for a descriptor learning task in the context of a person re-identification application.

4,862 citations
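
The gradient reversal layer described in the abstract above is straightforward to reproduce. The PyTorch snippet below is an illustrative sketch of the idea rather than the authors' released code: the layer acts as the identity in the forward pass and multiplies gradients by a negative factor in the backward pass, so minimizing the domain-classification loss drives the shared features toward domain invariance. The name GradReverse and the lambda value are assumptions for illustration.

import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; scales gradients by -lambda on the way back."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing into the feature extractor.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage: features feed the label predictor directly, but pass through
# grad_reverse before the domain classifier, so the same backward pass
# trains the features to confuse the domain classifier.
feats = torch.randn(8, 128, requires_grad=True)
domain_logits = torch.nn.Linear(128, 2)(grad_reverse(feats, lambd=0.5))
domain_logits.sum().backward()
print(feats.grad.shape)  # gradients arrive with reversed sign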

Posted Content
TL;DR: In this paper, a gradient reversal layer is proposed to promote the emergence of deep features that are discriminative for the main learning task on the source domain and invariant with respect to the shift between the domains.
Abstract: Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amounts of labeled data from the source domain and large amounts of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of "deep" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving an adaptation effect in the presence of big domain shifts and outperforming the previous state of the art on the Office datasets.

3,222 citations

01 Jan 2006

3,012 citations

Proceedings Article
06 Jul 2015
TL;DR: The method performs very well in a series of image classification experiments, achieving an adaptation effect in the presence of big domain shifts and outperforming the previous state of the art on the Office datasets.
Abstract: Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amounts of labeled data from the source domain and large amounts of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of "deep" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving an adaptation effect in the presence of big domain shifts and outperforming the previous state of the art on the Office datasets.

2,889 citations

Journal ArticleDOI
TL;DR: This paper empirically evaluates facial representations based on statistical local features, Local Binary Patterns (LBP), for person-independent facial expression recognition. It observes that LBP features perform stably and robustly over a useful range of low resolutions of face images and yield promising performance in compressed low-resolution video sequences captured in real-world environments.

2,098 citations
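
For reference, the Local Binary Pattern features evaluated in the paper above can be sketched in a few lines of NumPy. This is a generic basic-LBP reconstruction (real systems typically use uniform patterns computed over a grid of face regions), and the function names and parameters are illustrative assumptions, not the paper's implementation.

import numpy as np

def lbp_codes(gray):
    """Basic 3x3 LBP: each interior pixel becomes an 8-bit code built from
    comparisons with its eight neighbours."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]  # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise neighbours
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy: g.shape[0] - 1 + dy, 1 + dx: g.shape[1] - 1 + dx]
        codes |= (neigh >= c).astype(np.int32) << bit
    return codes

def lbp_histogram(gray, bins=256):
    """Normalized histogram of LBP codes, used as the face descriptor."""
    hist, _ = np.histogram(lbp_codes(gray), bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)

face = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
print(lbp_histogram(face).shape)  # (256,)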