
Showing papers on "Sorting published in 2014"


Journal ArticleDOI
TL;DR: This paper provides an extensive review of various passive and active separation techniques, including basic theories and experimental details; the working principles are explained in detail, and the performances of the devices are discussed.
Abstract: Separation and sorting of micron-sized particles has great importance in diagnostics, chemical and biological analyses, food and chemical processing and environmental assessment. By employing the unique characteristics of microscale flow phenomena, various techniques have been established for fast and accurate separation and sorting of microparticles in a continuous manner. The advancements in microfluidics enable sorting technologies that combine the benefits of continuous operation with small-sized scale suitable for manipulation and probing of individual particles or cells. Microfluidic sorting platforms require smaller sample volume, which has several benefits in terms of reduced cost of reagents, analysis time and less invasiveness to patients for sample extraction. Additionally, smaller size of device together with lower fabrication cost allows massive parallelization, which makes high-throughput sorting possible. Both passive and active separation and sorting techniques have been reported in literature. Passive techniques utilize the interaction between particles, flow field and the channel structure and do not require external fields. On the other hand, active techniques make use of external fields in various forms but offer better performance. This paper provides an extensive review of various passive and active separation techniques including basic theories and experimental details. The working principles are explained in detail, and performances of the devices are discussed.

613 citations


Journal ArticleDOI
TL;DR: The cytosolic and transmembrane sorting machinery that function at the TGN are reviewed, the molecular interactions and regulatory mechanisms that enable accurate protein sorting are described, and the importance of TGN sorting in physiology and disease is highlighted.
Abstract: The trans-Golgi network (TGN) is an important cargo sorting station within the cell where newly synthesized proteins are packaged into distinct transport carriers that are targeted to various destinations. To maintain the fidelity of protein transport, elaborate protein sorting machinery is employed to mediate sorting of specific cargo proteins into distinct transport carriers. Protein sorting requires assembly of the cytosolic sorting machinery onto the TGN membrane and capture of cargo proteins. We review the cytosolic and transmembrane sorting machinery that function at the TGN and describe molecular interactions and regulatory mechanisms that enable accurate protein sorting. In addition, we highlight the importance of TGN sorting in physiology and disease.

191 citations


Journal ArticleDOI
TL;DR: A nonlinear integer open location-routing model for the relief distribution problem, considering travel time, total cost, and reliability with split delivery, is constructed, and the non-dominated sorting genetic algorithm and the non-dominated sorting differential evolution algorithm are proposed to solve the model.
Abstract: The effective distribution of critical relief in post disaster plays a crucial role in post-earthquake rescue operations. The location of distribution centers and vehicle routing in the available transportation network are two of the most challenging issues in emergency logistics. This paper constructs a nonlinear integer open location-routing model for relief distribution problem considering travel time, the total cost, and reliability with split delivery. It proposes the non-dominated sorting genetic algorithm and non-dominated sorting differential evolution algorithm to solve the proposed model. A case study on the Great Sichuan Earthquake in China expounds the application of the proposed models and algorithms in practice.

176 citations
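Several papers in this list rely on non-dominated sorting, the ranking step at the core of NSGA-II and related evolutionary algorithms. As a generic illustration only, and not code from any of the papers above, a minimal sketch of fast non-dominated sorting might look as follows; the objective vectors and function names are hypothetical, and all objectives are assumed to be minimized.

```python
# Minimal sketch of fast non-dominated sorting (the ranking step used by
# NSGA-II and related algorithms). Generic illustration, not taken from
# any paper above. All objectives are assumed to be minimized.

def dominates(a, b):
    """True if solution a dominates b (no worse in all objectives, better in one)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Return a list of fronts; each front is a list of indices into objs."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    domination_count = [0] * n              # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dominates(objs[j], objs[i]):
                domination_count[i] += 1
        if domination_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                domination_count[j] -= 1
                if domination_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]  # drop the trailing empty front

# Example: hypothetical (cost, travel time) pairs for four candidate plans.
print(non_dominated_sort([(3, 9), (5, 4), (4, 8), (6, 7)]))
# -> [[0, 1, 2], [3]]
```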


Journal ArticleDOI
TL;DR: A new hybrid method based on AHP and the K-means algorithm is introduced: AHP-K-Veto, which provides a clearly higher clustering validity index than previous sorting methods, but it is a fully compensatory method.

127 citations


Journal ArticleDOI
TL;DR: Initial support for the construct validity of the List Sorting Working Memory Measure as a measure of working memory is provided; however, the relationship between the List Sorting Test and general executive function has yet to be determined.
Abstract: The List Sorting Working Memory Test was designed to assess working memory (WM) as part of the NIH Toolbox Cognition Battery. List Sorting is a sequencing task requiring children and adults to sort and sequence stimuli that are presented visually and auditorily. Validation data are presented for 268 participants ages 20 to 85 years. A subset of participants (N=89) was retested 7 to 21 days later. As expected, the List Sorting Test had moderately high correlations with other measures of working memory and executive functioning (convergent validity) but a low correlation with a test of receptive vocabulary (discriminant validity). Furthermore, List Sorting demonstrates expected changes over the age span and has excellent test–retest reliability. Collectively, these results provide initial support for the construct validity of the List Sorting Working Memory Measure as a measure of working memory. However, the relationship between the List Sorting Test and general executive function has yet to be determined. (JINS, 2014, 20, 1–12)

113 citations


Journal ArticleDOI
TL;DR: This paper found that a large group of people are willing to sort waste at the household level even if unsorted waste would be collected at no extra cost, and that most respondents preferred to sort it themselves when given the choice.

109 citations


Journal ArticleDOI
TL;DR: A DE framework with a multiobjective sorting-based mutation operator is presented; the operator is applied to original DE algorithms as well as several advanced DE variants, and the results show that it is an effective approach to enhancing the performance of most of the DE algorithms studied.
Abstract: Differential evolution (DE) is a simple and powerful population-based evolutionary algorithm. The salient feature of DE lies in its mutation mechanism. Generally, the parents in the mutation operator of DE are randomly selected from the population. Hence, all vectors are equally likely to be selected as parents without selective pressure at all. Additionally, the diversity information is always ignored. In order to fully exploit the fitness and diversity information of the population, this paper presents a DE framework with multiobjective sorting-based mutation operator. In the proposed mutation operator, individuals in the current population are firstly sorted according to their fitness and diversity contribution by nondominated sorting. Then parents in the mutation operators are proportionally selected according to their rankings based on fitness and diversity, thus, the promising individuals with better fitness and diversity have more opportunity to be selected as parents. Since fitness and diversity information is simultaneously considered for parent selection, a good balance between exploration and exploitation can be achieved. The proposed operator is applied to original DE algorithms, as well as several advanced DE variants. Experimental results on 48 benchmark functions and 12 real-world application problems show that the proposed operator is an effective approach to enhance the performance of most DE algorithms studied.

105 citations
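The mutation operator described above biases parent selection toward individuals with better rank rather than choosing parents uniformly at random. A minimal sketch of that idea is given below, assuming rank-proportional selection over fitness alone instead of the paper's combined fitness-and-diversity non-dominated sorting; all names and the toy objective are hypothetical.

```python
import random

# Sketch of rank-biased parent selection for a DE/rand/1 mutation step.
# Generic illustration only: the paper ranks individuals by a non-dominated
# sort over fitness *and* diversity; here we rank by fitness alone to keep
# the example short (minimization assumed).

def rank_weights(fitness):
    """Better (lower) fitness gets a larger selection weight."""
    order = sorted(range(len(fitness)), key=lambda i: fitness[i])
    weights = [0.0] * len(fitness)
    for rank, i in enumerate(order):        # rank 0 = best individual
        weights[i] = float(len(fitness) - rank)
    return weights

def pick_distinct(weights, k):
    """Draw k distinct indices with probability proportional to weight."""
    w = list(weights)
    chosen = []
    for _ in range(k):
        i = random.choices(range(len(w)), weights=w, k=1)[0]
        chosen.append(i)
        w[i] = 0.0                          # exclude from later draws
    return chosen

def rank_biased_mutant(pop, fitness, target_idx, F=0.5):
    """DE/rand/1 mutant v = x_r1 + F * (x_r2 - x_r3) with rank-biased parents."""
    weights = rank_weights(fitness)
    weights[target_idx] = 0.0               # parents must differ from the target
    r1, r2, r3 = pick_distinct(weights, 3)
    return [a + F * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]

pop = [[1.0, 2.0], [0.5, 1.5], [2.0, 0.2], [1.2, 1.1]]
fitness = [sum(x * x for x in ind) for ind in pop]   # toy sphere objective
print(rank_biased_mutant(pop, fitness, target_idx=0))
```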


Proceedings ArticleDOI
18 Jun 2014
TL;DR: A comprehensive collection of main-memory partitioning variants tuned for various layers of the memory hierarchy is considered, including non-in-place variants that use linear extra space and a NUMA-aware partitioning that guarantees locality on multiple processors.
Abstract: Analytical database systems can achieve high throughput main-memory query execution by being aware of the dynamics of highly-parallel modern hardware. Such systems rely on partitioning to cluster or divide data into smaller pieces and thus achieve better parallelism and memory locality. This paper considers a comprehensive collection of variants of main-memory partitioning tuned for various layers of the memory hierarchy. We revisit the pitfalls of in-cache partitioning, and utilizing the crucial performance factors, we introduce new variants for partitioning out-of-cache. Besides non-in-place variants where linear extra space is used, we introduce large-scale in-place variants, and propose NUMA-aware partitioning that guarantees locality on multiple processors. Also, we make range partitioning comparably fast with hash or radix, by designing a novel cache-resident index to compute ranges. All variants are combined to build three NUMA-aware sorting algorithms: a stable LSB radix-sort; an in-place MSB radix-sort using different variants across memory layers; and a comparison-sort utilizing wide-fanout range partitioning and SIMD-optimal in-cache sorting. To the best of our knowledge, all three are the fastest to date on billion-scale inputs for both dense and sparse key domains. As shown for sorting, our work can serve as a tool for building other operations (e.g., join, aggregation) by combining the most suitable variants that best meet the design goals.

99 citations
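For readers unfamiliar with the building blocks named above, a stable LSB radix sort is simply a sequence of stable partitioning (counting) passes over successive key digits. The sketch below is a plain single-threaded baseline under that assumption; it deliberately omits the cache-, NUMA-, and SIMD-aware engineering that is the paper's actual contribution.

```python
# Minimal sketch of a stable LSB radix sort over 32-bit unsigned keys,
# built from counting/partitioning passes of 8 bits each. Single-threaded
# and untuned; the paper's contribution is the cache-, NUMA- and
# SIMD-aware engineering of exactly these partitioning passes.

def lsb_radix_sort(keys, bits=32, radix_bits=8):
    mask = (1 << radix_bits) - 1
    for shift in range(0, bits, radix_bits):
        # Counting pass: histogram of the current digit.
        counts = [0] * (1 << radix_bits)
        for k in keys:
            counts[(k >> shift) & mask] += 1
        # Prefix sum turns counts into output offsets per partition.
        offsets = [0] * (1 << radix_bits)
        total = 0
        for d, c in enumerate(counts):
            offsets[d] = total
            total += c
        # Scatter pass: stable partition into the output buffer.
        out = [0] * len(keys)
        for k in keys:
            d = (k >> shift) & mask
            out[offsets[d]] = k
            offsets[d] += 1
        keys = out
    return keys

print(lsb_radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# -> [2, 24, 45, 66, 75, 90, 170, 802]
```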


Journal ArticleDOI
TL;DR: Two common decoders, the optimal linear estimator and the Kalman filter, are compared on how well they reconstruct the arm movements of non-human primates performing reaching tasks when receiving input from various sorting schemes; the results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here.
Abstract: Objective. Brain–computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. Approach. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Main results. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrodes voltage crossings: spike-sorting is useful. The four waveform based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Significance. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.

98 citations
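For context on the decoders being compared, a Kalman filter decoder maps binned spike counts to kinematics with the standard predict/update recursion. The sketch below is a generic linear-Gaussian Kalman filter with hypothetical state and observation matrices; it is not the model or the parameters fitted in the study.

```python
import numpy as np

# Generic Kalman filter decoding step (predict + update), of the kind used
# by BCI decoders that map binned spike counts to kinematics. Matrices are
# hypothetical placeholders, not those fitted in the study.
#   x_t = A x_{t-1} + w,  w ~ N(0, W)   (kinematic state model)
#   y_t = H x_t     + q,  q ~ N(0, Q)   (observation: spike counts)

def kalman_step(x, P, y, A, W, H, Q):
    # Predict.
    x_pred = A @ x
    P_pred = A @ P @ A.T + W
    # Update with the neural observation y.
    S = H @ P_pred @ H.T + Q
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: 2-D velocity state decoded from 4 "electrodes".
rng = np.random.default_rng(0)
A = np.eye(2); W = 0.01 * np.eye(2)
H = rng.normal(size=(4, 2)); Q = 0.1 * np.eye(4)
x, P = np.zeros(2), np.eye(2)
for _ in range(10):
    y = H @ np.array([1.0, -0.5]) + rng.normal(scale=0.1, size=4)
    x, P = kalman_step(x, P, y, A, W, H, Q)
print(x)   # estimate approaches the true velocity [1.0, -0.5]
```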


Journal ArticleDOI
TL;DR: ELECTRE-SORT, a new sorting method able to consider an unlimited number of criteria in order to assign machines to incomparable strategies, is developed and shown to provide more precise and flexible maintenance strategies than the Decision Making Grid.
Abstract: The increasing demand on productivity and quality requires machines to be constantly available for production. It is therefore crucial to develop an adequate maintenance programme. To facilitate this, several criteria need to be considered, such as: downtime, maintenance frequency, spare parts costs, bottleneck impacts, etc. In the literature, a strategy is selected for each machine with a multi-criteria decision choice method. However, before making an informed decision, each strategy needs to be tested on each machine and then their performances evaluated with a multicriteria decision method. This is time-consuming, inefficient and often unfeasible. As machines' performances are usually systematically collected by industries, a much more practical approach is to assign machines to a maintenance strategy. This is referred to as a sorting problem. However, this problem cannot be solved by existing multi-criteria sorting methods because maintenance strategies cannot always be completely ordered: incomparable strategies exist. Recently, a Decision Making Grid was proposed to allocate machines to incomparable strategies. However, this technique can only be applied to problems with two criteria. In this paper, we have developed ELECTRE-SORT, a new sorting method that is able to consider an unlimited number of criteria in order to assign machines to incomparable strategies. A case study illustrates that ELECTRE-SORT provides more precise and flexible maintenance strategies than the Decision Making Grid.

83 citations


Journal ArticleDOI
TL;DR: The results verify that the proposed NSGSA-CM is feasible and efficient for solving the SEEHTS problem; the algorithm introduces a particle memory characteristic and population social information into the velocity update process.

01 Jan 2014
TL;DR: In this article, a large panel of worker-level data from Britain is used to demonstrate the existence of an urban wage premium for wage levels that increases with city size; the authors also provide evidence of a city-size premium on wage growth, but show that this effect is driven purely by the increase in wage that occurs in the first year a worker moves to a larger location.
Abstract: This paper is concerned with the urban wage premium and addresses two central issues about which the field has not yet reached a consensus: first, the extent to which sorting of high ability individuals into urban areas explains the urban wage premium and second, whether workers receive this wage premium immediately, or through faster wage growth over time. Using a large panel of worker-level data from Britain, we first demonstrate the existence of an urban premium for wage levels, which increases in city size. We next provide evidence of a city size premium on wage growth, but show that this effect is driven purely by the increase in wage that occurs in the first year that a worker moves to a larger location. Controlling for sorting on the basis of unobservables we find no evidence of an urban wage growth premium. Experience in cities does have some impact on wage growth, however. Specifically, we show that workers who have at some point worked in a city experience faster wage growth than those who have never worked in a city.

Patent
17 Sep 2014
TL;DR: In this article, a garbage sorting platform based on two-dimensional code recognition technology, together with a method for its use, is presented; the platform consists of a remote server, an Internet of Things system platform client, and a terminal device.
Abstract: The invention discloses a garbage sorting platform based on two-dimensional code recognition technology and a method for its use. The garbage sorting platform comprises a remote server, a system platform client side of the Internet of Things, and a terminal device. The system platform client side of the Internet of Things comprises a two-dimensional code intelligent sorting garbage can management module, a garbage bag handing-out management module, a user and community login management module, a garbage sorting points module, a garbage sorting and recovering module and other device management modules. The terminal device comprises a two-dimensional code garbage bag handing-out machine, a two-dimensional code intelligent sorting garbage can, a garbage points inquiry and exchange machine and a garbage direct transporting device. Under a one-household-one-card scheme, a correspondence is established between two-dimensional code IDs and the data on the remote server. By means of the garbage sorting platform and method, garbage can be sorted correctly and recovered effectively, the trouble caused by secondary sorting is reduced, and the environmental sanitation department can conveniently count and process garbage sorting data; the platform and method therefore have very important social significance.

Journal ArticleDOI
TL;DR: In this paper, a modification of an existing MOEA known as Non-dominated Sorting Genetic Algorithm-II (NSGA-II) has been applied on a tri-objective problem for a two echelon serial supply chain.

Journal ArticleDOI
TL;DR: This work explores the physical parameter space of acoustic bubble sorting using well-defined bubble sizes formed in a flow-focusing device, then demonstrates successful acoustic sorting of a commercial UCA.
Abstract: An ultrasound contrast agent (UCA) suspension contains encapsulated microbubbles with a wide size distribution, with radii ranging from 1 to 10 μm. Medical transducers typically operate at a single frequency, therefore only a small selection of bubbles will resonate to the driving ultrasound pulse. Thus, the sensitivity can be improved by narrowing down the size distribution. Here, we present a simple lab-on-a-chip method to sort the population of microbubbles on-chip using a traveling ultrasound wave. First, we explore the physical parameter space of acoustic bubble sorting using well-defined bubble sizes formed in a flow-focusing device, then we demonstrate successful acoustic sorting of a commercial UCA. This novel sorting strategy may lead to an overall improvement of the sensitivity of contrast ultrasound by more than 10 dB.

Journal ArticleDOI
TL;DR: A multi-objective optimization model is developed to investigate two line-cell conversion performances: the total throughput time (TTPT) and the total labor hours (TLH) and it is found that the proposed genetic algorithm is useful and can get reliable solutions in most cases.

Journal ArticleDOI
TL;DR: PCR-activated cell sorting (PACS) provides a general new technical capability that expands the application space of cell sorting by enabling sorting based on cellular information not amenable to existing approaches.
Abstract: Cell sorting is a central tool in life science research for analyzing cellular heterogeneity or enriching rare cells out of large populations. Although methods like FACS and FISH-FC can characterize and isolate cells from heterogeneous populations, they are limited by their reliance on antibodies, or the requirement to chemically fix cells. We introduce a new cell sorting technology that robustly sorts based on sequence-specific analysis of cellular nucleic acids. Our approach, PCR-activated cell sorting (PACS), uses TaqMan PCR to detect nucleic acids within single cells and trigger their sorting. With this method, we identified and sorted prostate cancer cells from a heterogeneous population by performing >132 000 simultaneous single-cell TaqMan RT-PCR reactions targeting vimentin mRNA. Following vimentin-positive droplet sorting and downstream analysis of recovered nucleic acids, we found that cancer-specific genomes and transcripts were significantly enriched. Additionally, we demonstrate that PACS can be used to sort and enrich cells via TaqMan PCR reactions targeting single-copy genomic DNA. PACS provides a general new technical capability that expands the application space of cell sorting by enabling sorting based on cellular information not amenable to existing approaches.

Book ChapterDOI
08 Jul 2014
TL;DR: This is the first secure data-oblivious shuffle that is not based on sorting, and can be used to improve previous oblivious storage solutions for network-based outsourcing of data.
Abstract: We present a simple, efficient, and secure data-oblivious randomized shuffle algorithm. This is the first secure data-oblivious shuffle that is not based on sorting. Our method can be used to improve previous oblivious storage solutions for network-based outsourcing of data.
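For contrast with the paper's contribution, the conventional route to a data-oblivious shuffle is to tag each element with a random value and then sort the tags with a data-independent comparison network. The sketch below illustrates that sorting-based baseline (using an odd-even transposition network and random 64-bit tags); it is emphatically not the paper's sorting-free method.

```python
import secrets

# Sketch of the classical *sorting-based* oblivious shuffle that this line
# of work improves upon: attach a random tag to every element, then sort
# with a data-independent comparison network (here, odd-even transposition
# sort), so the sequence of memory positions touched does not depend on
# the data. Illustration only; the paper's shuffle avoids sorting.

def oblivious_shuffle(items):
    tagged = [(secrets.randbits(64), x) for x in items]    # random tags
    n = len(tagged)
    for step in range(n):                                   # fixed schedule
        start = step % 2
        for i in range(start, n - 1, 2):
            a, b = tagged[i], tagged[i + 1]
            # Compare-and-swap on the tag; positions accessed are fixed.
            lo, hi = (a, b) if a[0] <= b[0] else (b, a)
            tagged[i], tagged[i + 1] = lo, hi
    return [x for _, x in tagged]

print(oblivious_shuffle(list(range(10))))
```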

Journal ArticleDOI
TL;DR: A non-dominated sorting genetic algorithm is developed to tackle the bi-objective model, which involves a multitude of decision variables; the Pareto-optimal strategy and a feasibility-based rule are combined to obtain trade-offs between the objectives.
Abstract: The improvement of emergency coping capacity is one of the most efficient measures for mitigating disaster impact. Shelter planning is an important strategy to reduce the number of casualties and injuries and facilitate disaster recovery. This study aims to address earthquake shelter location selection and the districting planning of service areas jointly. A bi-objective model is proposed to minimise the total evacuation distance and the total cost, subject to capacity and contiguity constraints. A non-dominated sorting genetic algorithm is developed to tackle the bi-objective model, which involves a multitude of decision variables. To fit the model, the chromosome structure, initialisation process and genetic operators in the algorithm are specifically designed to maintain the contiguity of the service area. And a hybrid strategy of bidirectional multi-point crossover and bidirectional single-point crossover helps promote the diversity of the solutions and accelerate the convergence. Moreover, the Pareto...

Journal ArticleDOI
TL;DR: In this article, a non-dominated sorting genetic algorithm II (NSGA-II) is proposed to solve the operational decision-making problem incorporating both economic and environmental performance for single machine systems with deterministic product arrival time and the First Come First Served processing rule.

Journal ArticleDOI
TL;DR: A modified gradient ascent clustering (GAC) algorithm is used to do spike sorting based on a divide and conquer approach that can sort spikes with minimal user input in times comparable to real time for recordings lasting up to 45 min.
Abstract: In order to determine patterns of neural activity, spike signals recorded by extracellular electrodes have to be clustered (sorted) with the aim of ensuring that each cluster represents all the spikes generated by an individual neuron. Many methods for spike sorting have been proposed but few are easily applicable to recordings from polytrodes which may have 16 or more recording sites. As with tetrodes, these are spaced sufficiently closely that signals from single neurons will usually be recorded on several adjacent sites. Although this offers a better chance of distinguishing neurons with similarly shaped spikes, sorting is difficult in such cases because of the high dimensionality of the space in which the signals must be classified. This report details a method for spike sorting based on a divide and conquer approach. Clusters are initially formed by assigning each event to the channel on which it is largest. Each channel-based cluster is then sub-divided into as many distinct clusters as possible. These are then recombined on the basis of pairwise tests into a final set of clusters. Pairwise tests are also performed to establish how distinct each cluster is from the others. A modified gradient ascent clustering (GAC) algorithm is used to do the clustering. The method can sort spikes with minimal user input in times comparable to real time for recordings lasting up to 45 minutes. Our results illustrate some of the difficulties inherent in spike sorting, including changes in spike shape over time. We show that some physiologically distinct units may have very similar spike shapes. We show that RMS measures of spike shape similarity are not sensitive enough to discriminate clusters that can otherwise be separated by principal components analysis. Hence spike sorting based on least-squares matching to templates may be unreliable. Our methods should be applicable to tetrodes and scaleable to larger multi-electrode arrays (MEAs).
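The initial "divide" step described above, assigning each detected event to the channel on which it is largest, is straightforward to express. The sketch below is a generic illustration with hypothetical array shapes, not the authors' implementation, and it omits the subsequent gradient-ascent sub-clustering and pairwise recombination steps.

```python
import numpy as np

# Sketch of the initial "divide" step of a divide-and-conquer spike sorter:
# each detected event is assigned to the channel on which its peak
# amplitude is largest. Generic illustration with hypothetical array
# shapes; the GAC sub-clustering and recombination steps are omitted.

def assign_events_to_channels(waveforms):
    """waveforms: array of shape (n_events, n_channels, n_samples)."""
    peak_amplitude = np.abs(waveforms).max(axis=2)      # (n_events, n_channels)
    home_channel = peak_amplitude.argmax(axis=1)        # (n_events,)
    clusters = {}
    for event_idx, ch in enumerate(home_channel):
        clusters.setdefault(int(ch), []).append(event_idx)
    return clusters

# Toy example: 5 events recorded on a 16-site polytrode, 32 samples each.
rng = np.random.default_rng(1)
events = rng.normal(size=(5, 16, 32))
print(assign_events_to_channels(events))
```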

Journal ArticleDOI
Heng Li1
TL;DR: A new method to incrementally construct the FM-index for both short and long sequence reads, up to the size of a genome, is presented; it is the first algorithm that can build the index while implicitly sorting the sequences in reverse lexicographical order without a separate sorting step.
Abstract: Summary: We present a new method to incrementally construct the FM-index for both short and long sequence reads, up to the size of a genome. It is the first algorithm that can build the index while implicitly sorting the sequences in the reverse (complement) lexicographical order without a separate sorting step. The implementation is among the fastest for indexing short reads and the only one that practically works for reads averaging kilobases in length. Availability and implementation: https://github.com/lh3/ropebwt2 Contact: hengli@broadinstitute.org
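As background, the FM-index is built on the Burrows-Wheeler transform (BWT) of the indexed sequences. The sketch below constructs a BWT naively by sorting all rotations of a toy read; this is the textbook baseline only, not the incremental, sorting-free construction presented in the paper.

```python
# Background sketch: the Burrows-Wheeler transform (BWT) underlying an
# FM-index, built naively by sorting all rotations of the text. Textbook
# baseline only; the paper's algorithm builds the index incrementally,
# read by read, without this explicit sorting step.

def bwt(text, sentinel="$"):
    s = text + sentinel                      # sentinel sorts before A, C, G, T
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

print(bwt("ACGTACG"))   # prints the BWT of a toy read
```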

Journal ArticleDOI
TL;DR: The design of a new genetic algorithm (GA) is introduced to detect the locations of license plate (LP) symbols, with encouraging results of 98.4% overall accuracy on two different datasets having variability in orientation, scaling, plate location, illumination, and background complexity.
Abstract: In this research, the design of a new genetic algorithm (GA) is introduced to detect the locations of license plate (LP) symbols. An adaptive threshold method is applied to overcome the dynamic changes of illumination conditions when converting the image into binary. Connected component analysis technique (CCAT) is used to detect candidate objects inside the unknown image. A scale-invariant geometric relationship matrix is introduced to model the layout of symbols in any LP that simplifies system adaptability when applied in different countries. Moreover, two new crossover operators, based on sorting, are introduced, which greatly improve the convergence speed of the system. Most of the CCAT problems, such as touching or broken bodies, are minimized by modifying the GA to perform partial match until reaching an acceptable fitness value. The system is implemented using MATLAB and various image samples are experimented with to verify the distinction of the proposed system. Encouraging results with 98.4% overall accuracy are reported for two different datasets having variability in orientation, scaling, plate location, illumination, and complex background. Examples of distorted plate images are successfully detected due to the independency on the shape, color, or location of the plate.

Journal ArticleDOI
TL;DR: This article used the vignettes method to estimate individuals' willingness to pay for fringe benefits and job amenities, and found negative wage-fringe trade-offs, considerable heterogeneity in willingness to pay for fringe benefits, and signs of sorting.
Abstract: The two key predictions of hedonic wage theory are that there is a trade-off between wages and nonmonetary rewards and that the latter can be used as a sorting device by firms to attract and retain the kind of employees they desire. We use the vignettes method to estimate individuals’ willingness-to-pay for fringe benefits and job amenities. We find negative wage-fringe trade-offs, considerable heterogeneity in willingness-to-pay for fringe benefits, and signs of sorting.

Journal ArticleDOI
Mousumi Basu1
15 Dec 2014-Energy
TL;DR: In this article, the authors presented a non-dominated sorting genetic algorithm-II for solving the fuel-constrained economic emission dispatch problem of thermal generating units, which is a multiobjective optimization problem that includes the standard load constraints as well as the fuel constraints.

Journal ArticleDOI
TL;DR: Results obtained show that the proposed DDMO significantly outperforms the NSGA-II that optimizes the entire network as a whole in terms of efficiently finding good quality optimal fronts.
Abstract: This paper proposes an efficient decomposition and dual-stage multi-objective optimization (DDMO) method for designing water distribution systems with multiple supply sources (WDS-MSSs). Three phases are involved in the proposed DDMO approach. In Phase 1, an optimal source partitioning cut-set is identified for a WDS-MSS, allowing the entire WDS-MSS to be decomposed into sub-networks. Then in Phase 2 a non-dominated sorting genetic algorithm (NSGA-II) is employed to optimize the sub-networks separately, thereby producing an optimal front for each sub-network. Finally in Phase 3, another NSGA-II implementation is used to drive the combined sub-network front (an approximate optimal front) towards the Pareto front for the original complete WDS-MSS. Four WDS-MSSs are used to demonstrate the effectiveness of the proposed approach. Results obtained show that the proposed DDMO significantly outperforms the NSGA-II that optimizes the entire network as a whole in terms of efficiently finding good quality optimal fronts. Highlights: graph decomposition is incorporated for multi-objective optimization of water networks; a two-stage algorithm is developed for multi-objective optimization of water networks; real-world networks with up to 525 decision variables are used to verify the proposed method; the proposed method shows great efficiency in terms of finding Pareto fronts; and the approach provides an efficient decision-making tool for water network optimization.


Journal ArticleDOI
TL;DR: A method resulting from the combination of NSGA-N, MEAN and a new heuristic focuses the application of NDE operators on alarming network zones according to technical constraints, and generates similar-quality SR plans in distribution systems of significantly different sizes.

Journal ArticleDOI
TL;DR: In this paper, a non-dominated sorting multi-objective gravitational search algorithm (NSMOGSA) is proposed for OPF problems. The proposed method is employed for optimal adjustment of the power system control variables, which involve the continuous variables of the OPF problem.

Book ChapterDOI
23 Apr 2014