Patent

Determining biomarkers from histopathology slide images

04 Mar 2021
Abstract: A generalizable and interpretable deep learning model for predicting biomarker status and biomarker metrics from histopathology slide images is provided.


Topics: Biomarker (medicine) (59%)
Citations

5 results found


Patent
10 Mar 2020
Abstract: Two-pass capture of a macro image. In an embodiment, a scanning apparatus comprises a stage, a high-resolution camera, and a lens that provides a field of view, substantially equal in width to a slide width, to the high-resolution camera. The apparatus also comprises a first illumination system for transmission-mode illumination, and a second illumination system for reflection-mode illumination. Processor(s) move the stage in a first direction to capture a first macro image of a specimen during a single pass while the field of view is illuminated by the first illumination system, and move the stage in a second direction to capture a second macro image of the specimen during a single pass while the field of view is illuminated by the second illumination system. The processor(s) identify artifacts in the second macro image, and, based on those artifacts, correct the first macro image to generate a modified first macro image.


Topics: Lens (optics) (55%)

2 Citations
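
The image-correction step in this patent pairs the two captures: artifacts that show up under reflection-mode illumination are located and then corrected in the transmission-mode image. A minimal sketch of that idea on grayscale numpy arrays; the brightness threshold, the dilation, and the median-filter fill are illustrative assumptions, not the patent's actual procedure.

```python
import numpy as np
from scipy import ndimage

def correct_macro_image(transmission_img, reflection_img, artifact_threshold=200):
    """Illustrative sketch: flag bright regions in the reflection-mode image
    (e.g. dust or scratches that reflect light) and replace the corresponding
    pixels of the transmission-mode image with a local median estimate."""
    # Artifacts reflect strongly, so threshold the reflection-mode image.
    artifact_mask = reflection_img > artifact_threshold
    # Dilate the mask slightly so artifact borders are also corrected.
    artifact_mask = ndimage.binary_dilation(artifact_mask, iterations=2)
    # Fill masked pixels with a median-filtered version of the transmission image.
    smoothed = ndimage.median_filter(transmission_img, size=9)
    corrected = np.where(artifact_mask, smoothed, transmission_img)
    return corrected
```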


Patent
09 Jan 2020
Abstract: Systems and methods for performing quantitative histopathology analysis for determining tissue potency are disclosed. According to some embodiments, a method for training a tissue classifier is provided. According to the method, training the tissue classifier includes generating feature fingerprints of detected nuclei within slide images in a control library and clustering the slide images based on their corresponding feature fingerprints. According to some embodiments, a method for utilizing the trained tissue classifier is provided. According to the method, the trained tissue classifier determines whether tissue in an unknown slide image corresponds to slide images clustered during the training of the tissue classifier.


Topics: Feature (computer vision) (57.99%), Classifier (UML) (54%)

1 Citation
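
The training recipe described here (per-slide feature fingerprints derived from detected nuclei, clustering of the control library, then a membership test for unknown slides) can be sketched with off-the-shelf clustering. The fingerprint shape, cluster count, and distance threshold below are illustrative assumptions, not the patent's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_tissue_classifier(fingerprints, n_clusters=5):
    """fingerprints: (n_slides, n_features) array, one nuclei-derived
    feature fingerprint per control-library slide image."""
    model = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    model.fit(fingerprints)
    # Record how far training slides sit from their nearest cluster center,
    # giving a reference scale for "belongs to a known cluster".
    distances = np.min(model.transform(fingerprints), axis=1)
    threshold = distances.mean() + 2 * distances.std()
    return model, threshold

def matches_training_clusters(model, threshold, unknown_fingerprint):
    """True if the unknown slide's fingerprint falls near a cluster formed
    from the control library, i.e. the tissue resembles the training data."""
    d = np.min(model.transform(unknown_fingerprint.reshape(1, -1)), axis=1)[0]
    return d <= threshold
```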


Patent
19 Dec 2019
Abstract: A system trains and applies a machine learning model to label maps of a region. Various data modalities are combined as inputs for multiple data tiles used to characterize a region for a geographical map. Each data modality reflects sensor data captured in different ways. Some data modalities include aerial imagery, point cloud data, and location trace data. The different data modalities are captured independently and then aggregated using machine learning models to determine map labeling information about tiles in the region. Data is ingested by the system and corresponding tiles are identified. A tile is represented by a feature vector of different data types related to the various data modalities, and values from the ingested data are added to the feature vector for the tile. Models can be trained to predict characteristics of a region using these various types of input.

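The central data structure in this abstract is a per-tile feature vector whose slots are filled independently by each data modality as it is ingested. A small sketch of that aggregation; the modality names, block sizes, and tile keys are assumptions chosen for illustration.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class TileFeatures:
    """Feature vector for one map tile, with one slot group per data modality."""
    aerial: np.ndarray = field(default_factory=lambda: np.zeros(16))
    point_cloud: np.ndarray = field(default_factory=lambda: np.zeros(8))
    location_trace: np.ndarray = field(default_factory=lambda: np.zeros(4))

    def as_vector(self) -> np.ndarray:
        # Concatenate the modality blocks into the single vector a model sees.
        return np.concatenate([self.aerial, self.point_cloud, self.location_trace])

def ingest(tiles: dict, tile_id: str, modality: str, values: np.ndarray):
    """Add values from newly ingested sensor data to the tile's feature vector."""
    tile = tiles.setdefault(tile_id, TileFeatures())
    setattr(tile, modality, values)

# Usage: each modality is ingested independently, then vectors go to a model.
tiles = {}
ingest(tiles, "tile_42_17", "aerial", np.random.rand(16))
ingest(tiles, "tile_42_17", "location_trace", np.random.rand(4))
X = np.stack([t.as_vector() for t in tiles.values()])
```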


Patent
25 Dec 2020
Abstract: The invention discloses a fault identification method based on Grad-CAM attention guidance. The method comprises the following steps: S1, an attention map of a convolutional neural network is obtained through Grad-CAM; S2, a cross-entropy loss between this attention map and an attention map annotated by geoscience experts is added to the objective function of the convolutional neural network to obtain a new objective function; and S3, a fault identification model is trained using the objective function obtained in step S2. On the basis of a typical deep learning framework, an attention guidance mechanism is introduced that effectively increases the network's attention to faults and their neighboring pixels, provides effective guidance for the network's fault classification decisions, reduces fragmentation in fault recognition results, and yields identification results with better continuity.


Topics: Deep learning (59%), Convolutional neural network (56.99%), Graph (abstract data type) (54%)
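
Step S2 amounts to adding an attention-supervision term to the usual classification objective: the network's Grad-CAM map is compared against the expert-annotated map with a cross-entropy-style loss. A compact PyTorch sketch under stated assumptions: `features` are the activations of a chosen convolutional layer (captured, e.g., with a forward hook), the expert map is given in [0, 1], and the fault class index and loss weight `lam` are illustrative.

```python
import torch
import torch.nn.functional as F

def grad_cam_map(features, logits, target_class):
    """Grad-CAM: weight feature maps by the pooled gradients of the target
    class score, sum over channels, and keep the positive part."""
    score = logits[:, target_class].sum()
    grads = torch.autograd.grad(score, features, create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)          # (N, C, 1, 1)
    cam = F.relu((weights * features).sum(dim=1))            # (N, H, W)
    cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)  # normalize to [0, 1]
    return cam

def guided_loss(logits, labels, features, expert_map, lam=0.5):
    """Classification loss plus a cross-entropy-style term that pulls the
    Grad-CAM attention map toward the expert-annotated fault map."""
    ce_cls = F.cross_entropy(logits, labels)
    cam = grad_cam_map(features, logits, target_class=1)      # fault class assumed = 1
    expert = F.interpolate(expert_map, size=cam.shape[-2:],
                           mode="bilinear", align_corners=False).squeeze(1)
    ce_attn = F.binary_cross_entropy(cam.clamp(0, 1), expert)
    return ce_cls + lam * ce_attn
```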

Patent
17 Jun 2021
Abstract: A method of configuring images for display in a user interface can include accessing an image and information indicating locations of a plurality of features of interest within the image. The method can include determining a first tile arrangement indicating a first quantity of tiles associated with a first zoom level, such that each of the first tiles includes a portion of the image at a first downsampling and the first tiles collectively represent the image. The method can include determining a second tile arrangement indicating a second quantity of tiles associated with a second zoom level. The method can include determining a zoom level associated with display of a portion of the image and one or more tiles associated with that portion of the image at the determined zoom level. The method can include generating the determined one or more tiles and displaying them in a browser.


Topics: Zoom (56%)
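
The tile arrangements described here are standard image-pyramid bookkeeping: at each zoom level the image is downsampled and cut into fixed-size tiles, and only the tiles covering the displayed portion need to be generated. A short sketch of that arithmetic, with the 256-pixel tile size and the example slide dimensions assumed for illustration.

```python
import math

TILE_SIZE = 256  # pixels per tile edge at every zoom level (assumed)

def tile_arrangement(image_w, image_h, downsample):
    """Number of tiles (cols, rows) needed to cover the image at a given downsampling."""
    w = math.ceil(image_w / downsample)
    h = math.ceil(image_h / downsample)
    return math.ceil(w / TILE_SIZE), math.ceil(h / TILE_SIZE)

def tiles_for_viewport(x0, y0, x1, y1, downsample):
    """Tile indices covering a displayed region given in full-resolution coordinates."""
    cx0, cy0 = int(x0 / downsample) // TILE_SIZE, int(y0 / downsample) // TILE_SIZE
    cx1, cy1 = int(x1 / downsample) // TILE_SIZE, int(y1 / downsample) // TILE_SIZE
    return [(tx, ty) for ty in range(cy0, cy1 + 1) for tx in range(cx0, cx1 + 1)]

# Example: a 100,000 x 60,000 pixel slide viewed at 8x downsampling.
print(tile_arrangement(100_000, 60_000, 8))   # -> (49, 30)
```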

References

155 results found


Open access · Proceedings Article · DOI: 10.1109/CVPR.2016.90
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
27 Jun 2016
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.


Topics: Deep learning (53%), Residual (53%), Convolutional neural network (53%)

93,356 Citations
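
The residual reformulation means each block learns a correction F(x) and outputs F(x) + x, so identity mappings are trivial to represent and very deep stacks remain trainable. A minimal PyTorch sketch of the paper's basic two-convolution block, simplified to the identity-shortcut case (no projection for changing dimensions):

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """y = ReLU(F(x) + x), where F is two 3x3 conv + batch-norm layers."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)   # the shortcut (identity) connection

x = torch.randn(1, 64, 56, 56)
print(BasicResidualBlock(64)(x).shape)   # torch.Size([1, 64, 56, 56])
```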


Journal Article · DOI: 10.1023/B:VISI.0000029664.99615.94
David G. Lowe
Abstract: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.


Topics: 3D single-object recognition (64%), Haar-like features (63%), Feature (computer vision) (57.99%)

42,225 Citations
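
The recognition pipeline (distinctive local features, fast approximate nearest-neighbor matching, then geometric verification) maps directly onto OpenCV's SIFT implementation. In the sketch below the file names are hypothetical, the 0.75 ratio is Lowe's suggested test, and a RANSAC homography stands in for the paper's Hough-transform clustering and least-squares pose verification.

```python
import cv2
import numpy as np

img1 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Approximate nearest-neighbor matching (FLANN kd-trees) with Lowe's ratio test.
flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
matches = flann.knnMatch(des1, des2, k=2)
good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

# Geometric verification: a RANSAC homography plays the role of the paper's
# Hough clustering followed by a least-squares pose fit.
if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print(f"{int(mask.sum())} of {len(good)} matches consistent with one object pose")
```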


Open access · Journal Article · DOI: 10.3322/CAAC.21492
Abstract: This article provides a status report on the global burden of cancer worldwide using the GLOBOCAN 2018 estimates of cancer incidence and mortality produced by the International Agency for Research on Cancer, with a focus on geographic variability across 20 world regions. There will be an estimated 18.1 million new cancer cases (17.0 million excluding nonmelanoma skin cancer) and 9.6 million cancer deaths (9.5 million excluding nonmelanoma skin cancer) in 2018. In both sexes combined, lung cancer is the most commonly diagnosed cancer (11.6% of the total cases) and the leading cause of cancer death (18.4% of the total cancer deaths), closely followed by female breast cancer (11.6%), prostate cancer (7.1%), and colorectal cancer (6.1%) for incidence and colorectal cancer (9.2%), stomach cancer (8.2%), and liver cancer (8.2%) for mortality. Lung cancer is the most frequent cancer and the leading cause of cancer death among males, followed by prostate and colorectal cancer (for incidence) and liver and stomach cancer (for mortality). Among females, breast cancer is the most commonly diagnosed cancer and the leading cause of cancer death, followed by colorectal and lung cancer (for incidence), and vice versa (for mortality); cervical cancer ranks fourth for both incidence and mortality. The most frequently diagnosed cancer and the leading cause of cancer death, however, substantially vary across countries and within each country depending on the degree of economic development and associated social and lifestyle factors. It is noteworthy that high-quality cancer registry data, the basis for planning and implementing evidence-based cancer control programs, are not available in most low- and middle-income countries. The Global Initiative for Cancer Registry Development is an international partnership that supports better estimation, as well as the collection and use of local data, to prioritize and evaluate national cancer control efforts. CA: A Cancer Journal for Clinicians 2018;0:1-31. © 2018 American Cancer Society


Topics: Cancer registry (78%), Cancer (72%), Breast cancer (63%)

39,828 Citations


Open access · Proceedings Article
Karen Simonyan, Andrew Zisserman
04 Sep 2014
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.


38,283 Citations
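
The paper's key configuration choice is stacking very small 3x3 convolutions: two stacked 3x3 layers cover the same receptive field as one 5x5 layer (and three cover 7x7) with fewer parameters and more non-linearities. A tiny PyTorch sketch of one VGG-style stage; the channel counts are illustrative.

```python
import torch
import torch.nn as nn

def vgg_stage(in_ch, out_ch, num_convs):
    """A VGG-style stage: num_convs 3x3 conv+ReLU layers, then a 2x2 max-pool."""
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

# Two stacked 3x3 convs see a 5x5 receptive field; three see 7x7.
stage = vgg_stage(3, 64, num_convs=2)
print(stage(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 64, 112, 112])
```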


Open access · Proceedings Article · DOI: 10.1109/CVPR.2015.7298594
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet +5 more
07 Jun 2015
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.


29,453 Citations
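
The "increased width at constant budget" comes from the Inception module: parallel 1x1, 3x3, and 5x5 convolutions plus a pooling branch are concatenated, with 1x1 convolutions reducing channel counts before the expensive filters. A minimal PyTorch sketch of one module; the branch widths are illustrative rather than GoogLeNet's exact configuration.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 / pool branches, concatenated along channels.
    1x1 convolutions reduce dimensionality before the 3x3 and 5x5 filters."""
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, c1, 1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c3_red, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c3_red, c3, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, c5_red, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c5_red, c5, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, pool_proj, 1))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

m = InceptionModule(192, 64, 96, 128, 16, 32, 32)     # widths chosen for illustration
print(m(torch.randn(1, 192, 28, 28)).shape)           # torch.Size([1, 256, 28, 28])
```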


Performance Metrics
No. of citations received by the paper in previous years:
Year    Citations
2021    1
2020    3
2019    1
Network Information
Related Papers (5)
Method and system for use of biomarkers in diagnostic imaging (09 Jan 2004)

Kevin J. Parker, Jose Tamez-Pena +2 more

67% related
Experiments in molecular subtype recognition based on histopathology images (13 Apr 2016)

Eva Budinská, Fred T. Bosman +1 more

64% related
Method For Integrated Pathology Diagnosis And Digital Biomarker Pattern Analysis (17 Apr 2013)

Patrick M. McDonough, Jeffrey H. Price

63% related
Method and system for assessment of biomarkers by measurement of response to stimulus (08 Sep 2003)

Jose Tamez-Pena, S. Totterman +1 more

62% related