Author

Chris Brown

Other affiliations: IBM, Emory University, Duke University
Bio: Chris Brown is an academic researcher from the National Health and Medical Research Council. The author has contributed to research in topics including international relations and political philosophy, has an h-index of 74, and has co-authored 642 publications receiving 28,663 citations. Previous affiliations of Chris Brown include IBM and Emory University.


Papers
Book
01 Jan 1982

5,834 citations

Journal ArticleDOI
TL;DR: A framework to handle semantic scene classification, where a natural scene may contain multiple objects such that the scene can be described by multiple class labels, is presented and appears to generalize to other classification problems of the same nature.
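The paper's own multi-label framework is not reproduced here; purely as an illustration of the idea that one scene can carry several class labels, the following is a minimal binary-relevance sketch (one independent classifier per label) on hypothetical feature data.

```python
# A minimal binary-relevance sketch of multi-label scene
# classification. Features and labels are invented for illustration;
# this is not the framework from the paper.
import numpy as np
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression

# Hypothetical data: 6 images x 4 features; each row of Y marks
# which of the labels {beach, urban} apply to that image.
X = np.array([[0.9, 0.1, 0.3, 0.2],
              [0.8, 0.2, 0.4, 0.1],
              [0.1, 0.9, 0.7, 0.8],
              [0.2, 0.8, 0.6, 0.9],
              [0.7, 0.6, 0.5, 0.5],   # scene belonging to both classes
              [0.6, 0.7, 0.5, 0.6]])
Y = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1],
              [1, 1],
              [1, 1]])

# One binary classifier per label, so an image can receive
# several labels at once.
clf = MultiOutputClassifier(LogisticRegression()).fit(X, Y)
print(clf.predict([[0.75, 0.65, 0.5, 0.55]]))  # expected: both labels
```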

2,161 citations

Journal ArticleDOI
TL;DR: The roles played by the las and rhl QS systems during the early stages of static biofilm formation, when cells are adhering to a surface and forming microcolonies, are investigated, along with the implications of these findings for the design of biofilm prevention and eradication strategies.
Abstract: Acylated homoserine lactone molecules are used by a number of gram-negative bacteria to regulate cell density-dependent gene expression by a mechanism known as quorum sensing (QS). In Pseudomonas aeruginosa, QS or cell-to-cell signaling controls expression of a number of virulence factors, as well as biofilm differentiation. In this study, we investigated the role played by the las and rhl QS systems during the early stages of static biofilm formation when cells are adhering to a surface and forming microcolonies. These studies revealed a marked difference in biofilm formation between the PAO1 parent and the QS mutants when glucose, but not citrate, was used as the sole carbon source. To further elucidate the contribution of lasI and rhlI to biofilm maturation, we utilized fusions to unstable green fluorescent protein in concert with confocal microscopy to perform real-time temporal and spatial studies of these genes in a flowing environment. During the course of 8-day biofilm development, lasI expression was found to progressively decrease over time. Conversely, rhlI expression remained steady throughout biofilm development but occurred in a lower percentage of cells. Spatial analysis revealed that lasI and rhlI were maximally expressed in cells located at the substratum and that expression decreased with increasing biofilm height. Because QS was shown previously to be involved in biofilm differentiation, these findings have important implications for the design of biofilm prevention and eradication strategies.

539 citations

Journal ArticleDOI
TL;DR: This trial, the largest in recurrent ovarian cancer, demonstrated superiority in progression-free survival and a better therapeutic index of CD over standard CP, with quality of life and overall survival as secondary end points.
Abstract: Purpose This randomized, multicenter, phase III noninferiority trial was designed to test the efficacy and safety of the combination of pegylated liposomal doxorubicin (PLD) with carboplatin (CD) compared with standard carboplatin and paclitaxel (CP) in patients with platinum-sensitive relapsed/recurrent ovarian cancer (ROC). Patients and Methods Patients with histologically proven ovarian cancer with recurrence more than 6 months after first- or second-line platinum and taxane-based therapies were randomly assigned by stratified blocks to CD (carboplatin area under the curve [AUC] 5 plus PLD 30 mg/m²) every 4 weeks or CP (carboplatin AUC 5 plus paclitaxel 175 mg/m²) every 3 weeks for at least 6 cycles. The primary end point was progression-free survival (PFS); secondary end points were toxicity, quality of life, and overall survival. Results Overall, 976 patients were recruited. With a median follow-up of 22 months, PFS for the CD arm was statistically superior to the CP arm (hazard ratio, 0.821; 95% CI, 0.72 to 0.94; P = .005); median PFS was 11.3 versus 9.4 months, respectively. Although overall survival data are immature for final analysis, we report here a total of 334 deaths. Overall severe nonhematologic toxicity (36.8% v 28.4%; P < .01) leading to early discontinuation (15% v 6%; P < .001) occurred more frequently in the CP arm. More frequent grade 2 or greater alopecia (83.6% v 7%), hypersensitivity reactions (18.8% v 5.6%), and sensory neuropathy (26.9% v 4.9%) were observed in the CP arm; more hand-foot syndrome (grade 2 to 3, 12.0% v 2.2%), nausea (35.2% v 24.2%), and mucositis (grade 2 to 3, 13.9% v 7%) were observed in the CD arm.

497 citations


Cited by
Book
01 Jan 1988
TL;DR: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, providing a coherent explication of probability as a language for reasoning with partial belief.
Abstract: From the Publisher: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty and offers techniques, based on belief networks, that provide a mechanism for making semantics-based systems operational. Specifically, network-propagation techniques serve as a mechanism for combining the theoretical coherence of probability theory with modern demands of reasoning-systems technology: modular declarative inputs, conceptually meaningful inferences, and parallel distributed computation. Application areas include diagnosis, forecasting, image interpretation, multi-sensor fusion, decision support systems, plan recognition, planning, and speech recognition; in short, almost every task requiring that conclusions be drawn from uncertain clues and incomplete information. Probabilistic Reasoning in Intelligent Systems will be of special interest to scholars and researchers in AI, decision theory, statistics, logic, philosophy, cognitive psychology, and the management sciences. Professionals in the areas of knowledge-based systems, operations research, engineering, and statistics will find theoretical and computational tools of immediate practical use. The book can also be used as an excellent text for graduate-level courses in AI, operations research, or applied probability.
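The book's propagation algorithms are beyond a short snippet, but a minimal sketch of the underlying idea, updating partial belief on evidence by enumerating a joint distribution, can be shown on a hypothetical two-node network (Disease -> Test, with made-up probabilities).

```python
# A minimal sketch of belief updating by enumeration over a
# hypothetical two-node network: Disease -> Test. All numbers
# are invented for illustration.
P_disease = {True: 0.01, False: 0.99}
P_test_given = {True: {True: 0.95, False: 0.05},   # sensitivity
                False: {True: 0.10, False: 0.90}}  # false-positive rate

def posterior_disease(test_positive=True):
    """P(Disease | Test) via Bayes' rule over the joint distribution."""
    joint = {d: P_disease[d] * P_test_given[d][test_positive]
             for d in (True, False)}
    z = sum(joint.values())  # normalizing constant P(Test)
    return {d: p / z for d, p in joint.items()}

print(posterior_disease(True))  # ~{True: 0.0876, False: 0.9124}
```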

15,671 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
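As an illustration of the fourth category (per-user customization), the following is a minimal sketch of a learned mail filter: a naive Bayes classifier trained on a user's accept/reject decisions. All messages and labels are invented for illustration.

```python
# A minimal sketch of the mail-filtering scenario described above:
# the system learns which messages a user rejects and applies the
# learned rule to new mail. Data is hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = ["win a free prize now", "meeting agenda attached",
            "free offer click now", "lunch tomorrow?"]
rejected = [1, 0, 1, 0]  # 1 = user rejected the message

vec = CountVectorizer()          # bag-of-words features
X = vec.fit_transform(messages)
clf = MultinomialNB().fit(X, rejected)

# The learned filter is applied to incoming mail automatically.
print(clf.predict(vec.transform(["free prize inside"])))  # likely [1]
```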

13,246 citations

Journal ArticleDOI
TL;DR: A face recognition algorithm that is insensitive to large variation in lighting direction and facial expression is developed; it is based on Fisher's linear discriminant and produces well-separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expression.
Abstract: We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high-dimensional image space, provided the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well-separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low-dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed "Fisherface" method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases.
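A minimal sketch of the Fisher-discriminant projection step follows, using scikit-learn's LinearDiscriminantAnalysis on synthetic stand-in vectors rather than real face images (the full Fisherface method also applies PCA first to handle singular within-class scatter; the default svd solver below sidesteps that detail).

```python
# A minimal sketch of Fisher-discriminant projection: map
# high-dimensional "pixel" vectors into a subspace that separates
# classes. Data is a random stand-in, not real face images.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_pixels, n_per_class = 64, 20

# Two hypothetical "faces", each observed under varying illumination
# (modeled crudely here as additive noise around a class mean).
face_a = rng.normal(0.0, 1.0, n_pixels)
face_b = rng.normal(0.5, 1.0, n_pixels)
X = np.vstack([face_a + rng.normal(0, 0.3, (n_per_class, n_pixels)),
               face_b + rng.normal(0, 0.3, (n_per_class, n_pixels))])
y = np.array([0] * n_per_class + [1] * n_per_class)

# LDA finds at most (n_classes - 1) directions maximizing
# between-class scatter relative to within-class scatter.
lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
z = lda.transform(X)  # 1-D projections, well separated by class
print(z[:3].ravel(), z[-3:].ravel())
```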

11,674 citations

Journal ArticleDOI
TL;DR: Biesinger et al. propose a more consistent and effective approach to curve fitting, based on a combination of standard spectra from quality reference samples, a survey of appropriate literature databases and/or a compilation of literature references, and specific references where fitting procedures are available.
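The referenced fitting procedures are specific to XPS line shapes and reference databases; purely as a generic illustration of spectral curve fitting, the following sketch decomposes a synthetic spectrum into two Gaussian components with scipy.optimize.curve_fit. All peak positions, widths, and amplitudes are hypothetical.

```python
# A minimal curve-fitting sketch: resolve a synthetic "spectrum"
# into two Gaussian components. Parameters are invented; real XPS
# fitting uses constrained line shapes from reference samples.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width):
    return amp * np.exp(-((x - center) ** 2) / (2 * width ** 2))

def two_peaks(x, a1, c1, w1, a2, c2, w2):
    return gaussian(x, a1, c1, w1) + gaussian(x, a2, c2, w2)

x = np.linspace(700, 740, 400)  # hypothetical binding-energy axis (eV)
true = two_peaks(x, 5.0, 710.0, 2.0, 3.0, 724.0, 2.5)
y = true + np.random.default_rng(1).normal(0, 0.05, x.size)

# Initial guesses anchor the fit, playing the role that reference
# spectra play in constraining peak parameters.
p0 = [4.0, 709.0, 1.5, 2.5, 723.0, 2.0]
popt, _ = curve_fit(two_peaks, x, y, p0=p0)
print(np.round(popt, 2))  # recovered amplitudes, centers, widths
```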

7,498 citations

Journal ArticleDOI
TL;DR: This paper presents a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms.
Abstract: Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web.
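The paper's testbed is C++; purely as an illustration of the core matching step in such a taxonomy, the following is a minimal Python sketch of window-based SAD block matching on toy data, with none of the aggregation, optimization, or refinement stages the paper evaluates.

```python
# A minimal sketch of dense two-frame stereo by block matching:
# for each pixel, choose the disparity minimizing the sum of
# absolute differences (SAD) over a window. Toy data; no sub-pixel
# refinement or occlusion handling.
import numpy as np

def sad_disparity(left, right, max_disp=8, half=2):
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))  # winner-take-all
    return disp

rng = np.random.default_rng(0)
right_img = rng.random((32, 48))
left_img = np.roll(right_img, 3, axis=1)  # known disparity of 3
# Most interior pixels should recover disparity 3.
print(np.bincount(sad_disparity(left_img, right_img).ravel()))
```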

7,458 citations