Topic

Bottleneck

About: Bottleneck is a research topic. Over the lifetime, 10928 publications have been published within this topic receiving 181235 citations.


Papers
Proceedings ArticleDOI


Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen
18 Jun 2018
TL;DR: MobileNetV2, as described in this paper, is based on an inverted residual structure in which the shortcut connections run between the thin bottleneck layers, while the intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity.
Abstract: In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. The MobileNetV2 architecture is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet [1] classification, COCO object detection [2], and VOC image segmentation [3]. We evaluate the trade-offs between accuracy and the number of operations measured by multiply-adds (MAdd), as well as actual latency and the number of parameters.
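The abstract measures cost in multiply-adds (MAdds). As a rough sketch of why the inverted residual design is cheap, the following back-of-the-envelope count compares a dense 3x3 convolution at the expanded width against the full expand -> depthwise -> project block. The channel widths (a 24-channel bottleneck with a 6x expansion) are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope multiply-add (MAdd) counts per output pixel.
# Channel widths and the expansion factor are illustrative assumptions.

def standard_conv_madds(c_in, c_out, k=3):
    """MAdds per output pixel for a dense k x k convolution."""
    return k * k * c_in * c_out

def inverted_residual_madds(c_in, c_out, expansion=6, k=3):
    """MAdds per output pixel for a 1x1 expand -> k x k depthwise -> 1x1 project block."""
    c_mid = c_in * expansion
    expand = c_in * c_mid        # 1x1 pointwise expansion
    depthwise = k * k * c_mid    # depthwise: one k x k filter per channel
    project = c_mid * c_out      # 1x1 pointwise projection back to the bottleneck
    return expand + depthwise + project

# A dense 3x3 conv at the expanded width (144 channels) costs far more
# than the entire inverted residual block operating on a 24-channel bottleneck:
print(standard_conv_madds(144, 144))    # 186624
print(inverted_residual_madds(24, 24))  # 8208
```

This is the sense in which depthwise convolutions let the block work in a wide intermediate representation while keeping the operation count low.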

5,263 citations

Posted Content


Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen
TL;DR: A new mobile architecture, MobileNetV2, is described that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes and allows decoupling of the input/output domains from the expressiveness of the transformation.
Abstract: In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers, opposite to traditional residual models which use expanded representations in the input and output. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet classification, COCO object detection, and VOC image segmentation. We evaluate the trade-offs between accuracy and the number of operations measured by multiply-adds (MAdd), as well as the number of parameters.

4,006 citations

Journal ArticleDOI


TL;DR: In a population of constant size, the expected heterozygosity for a neutral locus when mutation and genetic drift are balanced is given by 4Nv/(4Nv + 1), under the assumption that new mutations are always different from the pre-existing alleles in the population.
Abstract: In a population of constant size the expected heterozygosity for a neutral locus when mutation and genetic drift are balanced is given by 4 Nv/(4Nv + 1) under the assumption that new mutations are always different from the pre-existing alleles in the population, where N is the effective population size and v the mutation rate per locus per generation (Kimura, 1968). The size of a natural population, however, often changes drastically in the evolutionary process. In an extreme case a single inseminated female from a large population may migrate to an unoccupied geographical or ecological territory and establish a new colony, followed by rapid population growth to form a new species. This process seems to have occurred repeatedly in the evolution of Hawaiian Drosophila species (Carson, 1970; 1971) and also in the establishment of the Bogota, Colombia, population of Drosophila pseudoobscura (Prakash, 1972). When population size is suddenly reduced, the average heterozygosity per locus is expected to decline, the rate of decline depending on the effective population size, while if population size increases the aver-
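The equilibrium formula from the abstract, H = 4Nv/(4Nv + 1), can be evaluated directly to see how a population bottleneck depresses expected heterozygosity. The values of N and v below are illustrative, not taken from the paper.

```python
# Equilibrium heterozygosity H = 4Nv / (4Nv + 1) under the
# infinite-alleles model, as given in the abstract above.
# N (effective size) and v (mutation rate) values are illustrative.

def expected_heterozygosity(effective_size, mutation_rate):
    theta = 4 * effective_size * mutation_rate
    return theta / (theta + 1)

# A bottleneck that sharply shrinks the effective population size
# sharply lowers the equilibrium heterozygosity:
print(expected_heterozygosity(10_000, 1e-5))  # theta = 0.4   -> H ~ 0.286
print(expected_heterozygosity(100, 1e-5))     # theta = 0.004 -> H ~ 0.004
```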

3,159 citations

Journal ArticleDOI


2,675 citations

Journal ArticleDOI


TL;DR: An approximation method for solving the minimum-makespan problem of job shop scheduling that sequences the machines one by one, successively, each time taking the machine identified as a bottleneck among the machines not yet sequenced.
Abstract: We describe an approximation method for solving the minimum makespan problem of job shop scheduling. It sequences the machines one by one, successively, taking each time the machine identified as a bottleneck among the machines not yet sequenced. Every time after a new machine is sequenced, all previously established sequences are locally reoptimized. Both the bottleneck identification and the local reoptimization procedures are based on repeatedly solving certain one-machine scheduling problems. Besides this straight version of the Shifting Bottleneck Procedure, we have also implemented a version that applies the procedure to the nodes of a partial search tree. Computational testing shows that our approach yields consistently better results than other procedures discussed in the literature. A high point of our computational testing occurred when the enumerative version of the Shifting Bottleneck Procedure found in a little over five minutes an optimal schedule to a notorious ten machines/ten jobs problem on which many algorithms have been run for hours without finding an optimal solution.
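The machine-by-machine loop described above can be sketched in miniature. This is a heavily simplified, hypothetical stand-in: the published procedure identifies the bottleneck and sequences each machine by solving one-machine maximum-lateness problems and then locally reoptimizes earlier sequences, whereas here the bottleneck proxy is total workload and the one-machine rule is longest-processing-time-first.

```python
# Simplified sketch of the shifting-bottleneck loop: repeatedly pick the
# unsequenced machine with the largest remaining workload as the
# "bottleneck" and fix a job order on it. The workload proxy and the
# LPT ordering are stand-ins, not the published algorithm.

def shifting_bottleneck_sketch(processing_times):
    """processing_times[machine][job] -> duration; returns machine -> job order."""
    unsequenced = set(processing_times)
    sequences = {}
    while unsequenced:
        # Bottleneck proxy: machine with the greatest total workload.
        bottleneck = max(unsequenced,
                         key=lambda m: sum(processing_times[m].values()))
        # Stand-in one-machine rule: longest processing time first.
        order = sorted(processing_times[bottleneck],
                       key=processing_times[bottleneck].get, reverse=True)
        sequences[bottleneck] = order
        unsequenced.remove(bottleneck)
    return sequences

times = {"M1": {"J1": 3, "J2": 5}, "M2": {"J1": 7, "J2": 2}}
print(shifting_bottleneck_sketch(times))
# M2 has the larger workload (9 vs 8), so it is sequenced first.
```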

1,507 citations


Network Information
Related Topics (5)
Software
130.5K papers, 2M citations
85% related
Cluster analysis
146.5K papers, 2.9M citations
85% related
Optimization problem
96.4K papers, 2.1M citations
82% related
Network packet
159.7K papers, 2.2M citations
82% related
Artificial neural network
207K papers, 4.5M citations
81% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    869
2022    1,747
2021    520
2020    720
2019    670
2018    595