
Showing papers on "Sorting published in 2015"


Journal ArticleDOI
TL;DR: This review examines the breadth of microfluidic cell sorting technologies, while focusing on those that offer the greatest potential for translation into clinical and industrial practice and that offer multiple, useful functions.
Abstract: Accurate and high-throughput cell sorting is a critical enabling technology in molecular and cellular biology, biotechnology, and medicine. While conventional methods can provide high-efficiency sorting in short timescales, advances in microfluidics have enabled the realization of miniaturized devices offering similar capabilities that exploit a variety of physical principles. We classify these technologies as either active or passive. Active systems generally use external fields (e.g., acoustic, electric, magnetic, and optical) to impose forces to displace cells for sorting, whereas passive systems use inertial forces, filters, and adhesion mechanisms to purify cell populations. Cell sorting on microchips provides numerous advantages over conventional methods by reducing the size of necessary equipment, eliminating potentially biohazardous aerosols, and simplifying the complex protocols commonly associated with cell sorting. Additionally, microchip devices are well suited for parallelization, enabling complete lab-on-a-chip devices for cellular isolation, analysis, and experimental processing. In this review, we examine the breadth of microfluidic cell sorting technologies, while focusing on those that offer the greatest potential for translation into clinical and industrial practice and that offer multiple, useful functions. We organize these sorting technologies by the type of cell preparation required (i.e., fluorescent label-based sorting, bead-based sorting, and label-free sorting) as well as by the physical principles underlying each sorting mechanism.

845 citations


Journal ArticleDOI
TL;DR: In this paper, a novel, computationally efficient approach to nondominated sorting is proposed, termed efficient nondominated sort (ENS), in which a solution to be assigned to a front needs to be compared only with those that have already been assigned to a front, thereby avoiding many unnecessary dominance comparisons.
Abstract: Evolutionary algorithms have been shown to be powerful for solving multiobjective optimization problems, in which nondominated sorting is a widely adopted technique in selection. This technique, however, can be computationally expensive, especially when the number of individuals in the population becomes large. This is mainly because in most existing nondominated sorting algorithms, a solution needs to be compared with all other solutions before it can be assigned to a front. In this paper we propose a novel, computationally efficient approach to nondominated sorting, termed efficient nondominated sort (ENS). In ENS, a solution to be assigned to a front needs to be compared only with those that have already been assigned to a front, thereby avoiding many unnecessary dominance comparisons. Based on this new approach, two nondominated sorting algorithms have been suggested. Both theoretical analysis and empirical results show that the ENS-based sorting algorithms are computationally more efficient than the state-of-the-art nondominated sorting methods.
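To make the ENS idea concrete, here is a minimal Python sketch (our illustration, not the authors' code; the function names and the sequential front search are our own simplification of the ENS-SS variant): each solution is compared only against solutions already assigned to a front.

```python
def dominates(a, b):
    """True if a Pareto-dominates b (minimization on all objectives)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def ens_sort(population):
    """Assign solutions to nondominated fronts, ENS-style.

    The population is first sorted by the first objective, so a solution
    can only be dominated by solutions that precede it in this order.
    """
    population = sorted(population)
    fronts = []  # each front is a list of already-assigned solutions
    for sol in population:
        placed = False
        for front in fronts:
            # Compare only with members already assigned to this front.
            if not any(dominates(member, sol) for member in front):
                front.append(sol)
                placed = True
                break
        if not placed:
            fronts.append([sol])  # sol is dominated in every existing front
    return fronts

fronts = ens_sort([(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)])
```

The saving over naive nondominated sorting comes from the early `break`: most solutions land in an early front after a handful of comparisons instead of being checked against the whole population.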

378 citations


Book
Richard Cole1
06 Sep 2015
TL;DR: This paper provides a general method that trims a factor of O(log n) time for many applications of this technique.
Abstract: Megiddo introduced a technique for using a parallel algorithm for one problem to construct an efficient serial algorithm for a second problem. We give a general method that trims a factor of O(log n) time (or more) for many applications of this technique.

301 citations


Journal ArticleDOI
TL;DR: This paper proposes a hybrid multiobjective evolutionary algorithm integrating these two different strategies for combinatorial optimization problems with two or three objectives that outperforms other approaches.
Abstract: Domination-based sorting and decomposition are two basic strategies used in multiobjective evolutionary optimization. This paper proposes a hybrid multiobjective evolutionary algorithm integrating these two different strategies for combinatorial optimization problems with two or three objectives. The proposed algorithm works with an internal (working) population and an external archive. It uses a decomposition-based strategy for evolving its working population and uses a domination-based sorting for maintaining the external archive. Information extracted from the external archive is used to decide which search regions should be searched at each generation. In such a way, the domination-based sorting and the decomposition strategy can complement each other. In our experimental studies, the proposed algorithm is compared with a domination-based approach, a decomposition-based one, and one of its enhanced variants on two well-known multiobjective combinatorial optimization problems. Experimental results show that our proposed algorithm outperforms other approaches. The effects of the external archive in the proposed algorithm are also investigated and discussed.

182 citations


Journal ArticleDOI
TL;DR: In this article, an improved nondominated sorting genetic algorithm-II (INSGA-II) was proposed for optimal planning of multiple distributed generation (DG) units in distribution systems.
Abstract: An improved nondominated sorting genetic algorithm–II (INSGA-II) has been proposed for optimal planning of multiple distributed generation (DG) units in this paper. First, multiobjective functions that take minimum line loss, minimum voltage deviation, and maximal voltage stability margin into consideration have been formed. Then, using the proposed INSGA-II algorithm to solve the multiobjective planning problem has been described in detail. The improved sorting strategy and the novel truncation strategy based on hierarchical agglomerative clustering are utilized to keep the diversity of population. In order to strengthen the global optimal searching capability, the mutation and recombination strategies in differential evolution are introduced to replace the original one. In addition, a tradeoff method based on fuzzy set theory is used to obtain the best compromise solution from the Pareto-optimal set. Finally, several experiments have been made on the IEEE 33-bus test case and multiple actual test cases with the consideration of multiple DG units. The feasibility and effectiveness of the proposed algorithm for optimal placement and sizing of DG in distribution systems have been proved.

175 citations


Journal ArticleDOI
TL;DR: A new geometry is presented that replaces the hard divider separating the outlets with a gapped divider, allowing sorting over ten times faster.
Abstract: Fluorescence-activated droplet sorting is an important tool for droplet microfluidic workflows, but published approaches are unable to surpass throughputs of a few kilohertz. We present a new geometry that replaces the hard divider separating the outlets with a gapped divider, allowing sorting over ten times faster.

168 citations


Journal ArticleDOI
TL;DR: The experimental results indicate that the proposed NSLS is able to find a better spread of solutions and a better convergence to the true Pareto-optimal front compared to the other four algorithms.
Abstract: In this paper, a new multiobjective optimization framework based on nondominated sorting and local search (NSLS) is introduced. The NSLS is based on iterations. At each iteration, given a population P, a simple local search method is used to get a better population P′, and then nondominated sorting is applied to P ∪ P′ to obtain a new population for the next iteration. Furthermore, the farthest-candidate approach is combined with the nondominated sorting to choose the new population for improving the diversity. Additionally, another version of NSLS (NSLS-C) is used for comparison, which replaces the farthest-candidate method with the crowded comparison mechanism presented in the nondominated sorting genetic algorithm II (NSGA-II). The proposed method (NSLS) is compared with NSLS-C and three other classic algorithms: NSGA-II, MOEA/D-DE, and MODEA on a set of seventeen bi-objective and three tri-objective test problems. The experimental results indicate that the proposed NSLS is able to find a better spread of solutions and a better convergence to the true Pareto-optimal front compared to the other four algorithms. Furthermore, the sensitivity of NSLS is also experimentally investigated in this paper.
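As a rough sketch of the NSLS iteration described above (our illustration with a toy deterministic move; the paper's local-search operator and farthest-candidate selection are richer):

```python
def dominates(a, b):
    """True if a Pareto-dominates b (minimization on all objectives)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(pop):
    """Keep only solutions not dominated by any other member."""
    return [s for s in pop if not any(dominates(t, s) for t in pop if t != s)]

def local_search(pop, step):
    """Toy local search: apply a fixed move to every solution."""
    return [step(s) for s in pop]

def nsls_iteration(pop, step, size):
    """One NSLS-style iteration: build P' by local search, then select
    survivors from the union of P and P' by nondominated sorting."""
    union = pop + local_search(pop, step)
    front = nondominated(union)
    # The paper breaks ties with a farthest-candidate rule; we just truncate.
    return front[:size]

pop = [(2, 2), (3, 1)]
new_pop = nsls_iteration(pop, lambda s: (s[0] - 1, s[1]), size=2)
```

Here the toy move improves the first objective, so the union's first front consists entirely of the locally improved solutions.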

140 citations


Journal ArticleDOI
TL;DR: A high-throughput cell sorting method based on standing surface acoustic waves (SSAWs) based on a pair of focused interdigital transducers (FIDTs) to generate SSAW with high resolution and high energy efficiency is reported.
Abstract: Acoustic-based fluorescence activated cell sorters (FACS) have drawn increased attention in recent years due to their versatility, high biocompatibility, high controllability, and simple design. However, the sorting throughput for existing acoustic cell sorters is far from optimum for practical applications. Here we report a high-throughput cell sorting method based on standing surface acoustic waves (SSAWs). We utilized a pair of focused interdigital transducers (FIDTs) to generate SSAW with high resolution and high energy efficiency. As a result, the sorting throughput is improved significantly from conventional acoustic-based cell sorting methods. We demonstrated the successful sorting of 10 μm polystyrene particles with a minimum actuation time of 72 μs, which translates to a potential sorting rate of more than 13 800 events per second. Without using a cell-detection unit, we were able to demonstrate an actual sorting throughput of 3300 events per second. Our sorting method can be conveniently integrated with upstream detection units, and it represents an important development towards a functional acoustic-based FACS system.

130 citations


Journal ArticleDOI
TL;DR: The strongest motivation for central sorting of residual MSW is found in areas where source separation and separate collection are difficult, such as urban agglomerations; in such areas central sorting can contribute to increasing recycling rates, either complementary to or as a substitute for source separation of certain materials.

127 citations


Journal ArticleDOI
TL;DR: A novel multi-criteria decision making (MCDM) method named FlowSort-GDSS is proposed to sort the failure modes into priority classes by involving multiple decision-makers to make the decision more robust.
Abstract: Most real-world decision-making activities involve multiple decision-makers. FlowSort-GDSS (group decision support system) is proposed to solve sorting problems with multiple decision-makers. FlowSort-GDSS is applied to the failure mode and effects analysis (FMEA). A blow moulding process is analysed by the FMEA approach combined with FlowSort-GDSS. Failure mode and effects analysis (FMEA) is a well-known approach for correlating the failure modes of a system to their effects, with the objective of assessing their criticality. The criticality of a failure mode is traditionally established by its risk priority number (RPN), which is the product of the scores assigned to the three risk factors: the likelihood of occurrence, the chance of being undetected, and the severity of the effects. Taking a simple "unweighted" product has major shortcomings. One of them is to provide just a number, which does not sort failure modes into priority classes. Moreover, to make the decision more robust, the FMEA is better tackled by multiple decision-makers. Unfortunately, the literature lacks group decision support systems (GDSS) for sorting failures in the field of the FMEA. In this paper, a novel multi-criteria decision making (MCDM) method named FlowSort-GDSS is proposed to sort the failure modes into priority classes by involving multiple decision-makers. The essence of this method lies in the pair-wise comparison between the failure modes and the reference profiles established by the decision-makers on the risk factors. Finally, a case study is presented to illustrate the advantages of this new robust method in sorting failures.

116 citations


Journal ArticleDOI
TL;DR: Providing a property-close collection system to collect more waste fractions, as well as finding new communication channels for information about sorting, can be used as tools to increase the source separation ratio.

Book ChapterDOI
03 Sep 2015
TL;DR: In this article, a CUDA GPU library is proposed to accelerate evaluations with homomorphic schemes defined over polynomial rings enabled with a number of optimizations including algebraic techniques for efficient evaluation, memory minimization techniques, memory and thread scheduling and low level CUDA hand-tuned assembly optimizations.
Abstract: We introduce a CUDA GPU library to accelerate evaluations with homomorphic schemes defined over polynomial rings enabled with a number of optimizations including algebraic techniques for efficient evaluation, memory minimization techniques, memory and thread scheduling and low level CUDA hand-tuned assembly optimizations to take full advantage of the mass parallelism and high memory bandwidth GPUs offer. The arithmetic functions constructed to handle very large polynomial operands using number-theoretic transform (NTT) and Chinese remainder theorem (CRT) based methods are then extended to implement the primitives of the leveled homomorphic encryption scheme proposed by Lopez-Alt, Tromer and Vaikuntanathan. To compare the performance of the proposed CUDA library we implemented two applications: the Prince block cipher and homomorphic sorting algorithms on two GPU platforms in single GPU and multiple GPU configurations. We observed a speedup of 25 times and 51 times over the best previous GPU implementation for Prince with single and triple GPUs, respectively. Similarly for homomorphic sorting we obtained 12–41 times speedup depending on the number and size of the sorted elements.

Journal ArticleDOI
TL;DR: Results imply that when thresholding is used instead of spike sorting, only a small amount of performance may be lost in BMI decoder applications, which may significantly extend the lifetime of a device.
Abstract: Objective. For intracortical brain–machine interfaces (BMIs), action potential voltage waveforms are often sorted to separate out individual neurons. If these neurons contain independent tuning information, this process could increase BMI performance. However, the sorting of action potentials (‘spikes’) requires high sampling rates and is computationally expensive. To explicitly define the difference between spike sorting and alternative methods, we quantified BMI decoder performance when using threshold-crossing events versus sorted action potentials. Approach. We used data sets from 58 experimental sessions from two rhesus macaques implanted with Utah arrays. Data were recorded while the animals performed a center-out reaching task with seven different angles. For spike sorting, neural signals were sorted into individual units by using a mixture of Gaussians to cluster the first four principal components of the waveforms. For thresholding events, spikes that simply crossed a set threshold were retained. We decoded the data offline using both a Naive Bayes classifier for reaching direction and a linear regression to evaluate hand position. Main results. We found the highest performance for thresholding when placing a threshold between −3 and −4.5 × Vrms. Spike sorted data outperformed thresholded data for one animal but not the other. The mean Naive Bayes classification accuracy for sorted data was 88.5% and changed by 5% on average when data were thresholded. The mean correlation coefficient for sorted data was 0.92, and changed by 0.015 on average when thresholded. Significance. For prosthetics applications, these results imply that when thresholding is used instead of spike sorting, only a small amount of performance may be lost. The utilization of threshold-crossing events may significantly extend the lifetime of a device because these events are often still detectable once single neurons are no longer isolated.
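As a rough illustration of the threshold-crossing alternative described above (our sketch; the study's recordings, thresholds, and decoders are far richer), an event can be counted whenever the signal crosses a multiple of Vrms from above:

```python
import math

def threshold_crossings(signal, multiplier=-3.5):
    """Return sample indices where the signal crosses multiplier * Vrms
    from above (a crude stand-in for spike detection without sorting)."""
    vrms = math.sqrt(sum(s * s for s in signal) / len(signal))
    thresh = multiplier * vrms
    return [i for i in range(1, len(signal))
            if signal[i] <= thresh < signal[i - 1]]

# Synthetic trace: low-amplitude background with two large negative deflections.
signal = [1, -1] * 24 + [-20] + [1, -1] * 24 + [-20] + [1, -1]
events = threshold_crossings(signal)
```

Unlike spike sorting, no waveform clustering is needed, which is why thresholding remains usable once single units can no longer be isolated.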

Journal ArticleDOI
TL;DR: The obtained result shows the potential of the proposed method in achieving the optimal solution of single and multi objective optimal power flow problems.

Patent
Fen Lin1
06 May 2015
TL;DR: In this paper, an automatic question-answering system is presented, consisting of a user input module, a question analysis module, a semantic searching and sorting module, and an output module.
Abstract: The invention discloses an automatic question-answering system and method. The automatic question-answering system comprises a user input module, a question analysis module, a semantic searching and sorting module and an output module. The user input module is used for receiving question information input by users asking questions. The question analysis module is used for analyzing the question information input by the users and determining key word sets, question types and user intention types. The semantic searching and sorting module is used for searching question-answering banks and category trees to obtain matched alternative answers according to the key word sets, the question types and the user intention types, determining searching correlation and sorting the alternative answers according to the searching correlation. The output module is used for outputting the alternative answers ranking at the top. By utilizing the automatic question-answering system and method, the collection cost can be lowered, and the success rate of answering of the automatic question-answering system can be increased.

Book ChapterDOI
26 Jan 2015
TL;DR: This work formally defines private outsourced sorting and presents an efficient construction that is based on an encryption scheme with semi-homomorphic properties that guarantees that neither the server nor the coprocessor learn anything about user data as long as they are non-colluding.
Abstract: We propose a framework where a user can outsource his data to a cloud server in an encrypted form and then request the server to perform computations on this data and sort the result. Sorting is achieved via a novel protocol where the server is assisted by a secure coprocessor that is required to have only minimal computational and memory resources. The server and the coprocessor are assumed to be honest but curious, i.e., they honestly follow the protocol but are interested in learning more about the user data. We refer to the new protocol as private outsourced sorting since it guarantees that neither the server nor the coprocessor learn anything about user data as long as they are non-colluding. We formally define private outsourced sorting and present an efficient construction that is based on an encryption scheme with semi-homomorphic properties.

Journal ArticleDOI
TL;DR: This paper presents the comparison of two hybrid methodologies for the two-objective (cost and resilience) design of water distribution systems, in which a main controller (the non-dominated sorting genetic algorithm II, NSGA-II) coordinates various subordinate algorithms.
Abstract: This paper presents the comparison of two hybrid methodologies for the two-objective (cost and resilience) design of water distribution systems. The first method is a low-level hybrid algorithm (LLHA), in which a main controller (the non-dominated sorting genetic algorithm II, NSGA-II) coordinates various subordinate algorithms. The second method is a high-level hybrid algorithm (HLHA), in which various sub-algorithms collaborate in parallel. Applications to four case studies of increasing complexity enable the performances of the hybrid algorithms to be compared with each other and with the performance of the NSGA-II. In the case study featuring low/intermediate complexity, the hybrid algorithms (especially the HLHA) successfully capture a more diversified Pareto front, although the NSGA-II shows the best convergence. When network complexity increases, by contrast, the hybrid algorithms (especially the LLHA) turn out to be superior in terms of both convergence and diversity. With respect to both the HLHA and the NSGA-II, the LLHA is capable of detecting the final front in a single run with a lower computation burden. In contrast, the HLHA and the NSGA-II, which are more affected by the initial random seed, require numerous runs in an attempt to reach the definitive Pareto front. On the other hand, a drawback of the LLHA lies in its reduced ability to deal with general problem formulations, i.e., those not relating to water distribution optimal design.

Journal ArticleDOI
TL;DR: A new high-fidelity reversible data hiding scheme that can introduce less image distortion at the same embedding payload is proposed using the prediction-error histogram modification technique.

Journal ArticleDOI
TL;DR: A new multiple criteria sorting approach that uses characteristic profiles for defining the classes and outranking relation as the preference model, similarly to the Electre Tri-C method is presented.
Abstract: We present a new multiple criteria sorting approach that uses characteristic profiles for defining the classes and outranking relation as the preference model, similarly to the Electre Tri-C method. We reformulate the conditions for the worst and best class assignments of Electre Tri-C to increase comprehensibility of the method and interpretability of the results it delivers. Then, we present a disaggregation procedure for inferring the set of outranking models compatible with the given preference information, and use the set in deriving, for each decision alternative, the necessary and possible assignments. Furthermore, we introduce simplified assignment procedures and prove that they maintain a no class jumps-property in the possible assignments. Application of the proposed approach is demonstrated by classifying 40 land zones in 4 classes representing different risk levels.

Journal ArticleDOI
01 Jul 2015
TL;DR: Several multi-objective Pareto-based optimization algorithms are presented and the algorithms were analyzed statistically and graphically to find the optimal value for both order quantity and reorder point through minimizing the total cost and maximizing the service level of the proposed model simultaneously.
Abstract: We propose a bi-objective multi-product (r,Q) inventory model. We consider a budget limitation and storage space in the model. We present several multi-objective Pareto-based optimization algorithms. The algorithms are compared with the best-developed multi-objective algorithms. In this paper, a bi-objective multi-product (r,Q) inventory model in which the inventory level is reviewed continuously is proposed. The aim of this work is to find the optimal values for both the order quantity and the reorder point by minimizing the total cost and maximizing the service level of the proposed model simultaneously. It is assumed that shortage could occur and unsatisfied demand could be backordered. There is a budget limitation and a storage space constraint in the model. With regard to the complexity of the proposed model, several Pareto-based meta-heuristic approaches such as multi-objective vibration damping optimization (MOVDO), multi-objective imperialist competitive algorithm (MOICA), multi-objective particle swarm optimization (MOPSO), non-dominated ranked genetic algorithm (NRGA), and non-dominated sorting genetic algorithm (NSGA-II) are applied to solve the model. In order to compare the results, several numerical examples are generated, and the algorithms are then analyzed statistically and graphically.

Proceedings ArticleDOI
27 May 2015
TL;DR: This paper argues that in terms of cache efficiency, the two paradigms of hashing and sorting are actually the same, and designs an algorithmic framework that allows switching seamlessly between hashing and sorting routines during execution.
Abstract: For decades researchers have studied the duality of hashing and sorting for the implementation of the relational operators, especially for efficient aggregation. Depending on the underlying hardware and software architecture, the specifically implemented algorithms, and the data sets used in the experiments, different authors came to different conclusions about which is the better approach. In this paper we argue that in terms of cache efficiency, the two paradigms are actually the same. We support our claim by showing that the complexity of hashing is the same as the complexity of sorting in the external memory model. Furthermore, we make the similarity of the two approaches obvious by designing an algorithmic framework that allows us to switch seamlessly between hashing and sorting during execution. The fact that we mix hashing and sorting routines in the same algorithmic framework allows us to leverage the advantages of both approaches and makes their similarity obvious. On a more practical note, we also show how to achieve very low constant factors by tuning both the hashing and the sorting routines to modern hardware. Since we observe a complementary dependency of the constant factors of the two routines on the locality of the input, we exploit our framework to switch to the faster routine where appropriate. The result is a novel relational aggregation algorithm that is cache-efficient (independently of and without prior knowledge of input skew and output cardinality), highly parallelizable on modern multi-core systems, and operating at a speed close to the memory bandwidth, thus outperforming the state-of-the-art by up to 3.7x.
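To make the hashing/sorting duality concrete, here is a toy Python sketch (our illustration, not the paper's implementation) of the same aggregation, SUM(value) GROUP BY key, computed by both paradigms:

```python
from collections import defaultdict
from itertools import groupby

def hash_aggregate(rows):
    """Hash-based aggregation: one pass, accumulate into a hash table."""
    acc = defaultdict(int)
    for key, value in rows:
        acc[key] += value
    return dict(acc)

def sort_aggregate(rows):
    """Sort-based aggregation: sort by key, then sum each run of equal keys."""
    result = {}
    for key, group in groupby(sorted(rows), key=lambda r: r[0]):
        result[key] = sum(value for _, value in group)
    return result

rows = [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("b", 5)]
```

Both functions produce identical results; the paper's point is that, once partitioning and cache behavior are accounted for, their memory-access patterns are equivalent as well, so a runtime can pick whichever routine suits the input's locality.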

Journal ArticleDOI
TL;DR: Moving and sorting cells with sound are a few of the possible applications for this no-contact technique.
Abstract: Moving and sorting cells with sound are a few of the possible applications for this no-contact technique.

Journal ArticleDOI
TL;DR: A sorting method based on Fuzzy Set Theory and FlowSort, a Promethee-based sorting method, is proposed; the results confirm the potential and applicability of associating fuzzy logic with the FlowSort method for processing imprecise data.

Journal ArticleDOI
TL;DR: A new multi-objective path-finding model is proposed to find optimal paths in road networks with time-dependent stochastic travel times and it is demonstrated that the proposed approach is able to provide a set of non-dominated paths from which travelers can choose their paths based on their attitudes toward travel time uncertainty.
Abstract: We propose a multi-objective path finding model.We consider the stochastic and time-varying nature of travel time in our model.The model is solved by the non-dominated sorting genetic algorithm.The Taguchi method is used to tune the parameters of the genetic algorithm. In this paper, a new multi-objective path-finding model is proposed to find optimal paths in road networks with time-dependent stochastic travel times. This study is motivated by the fact that different travelers usually have different route-choice preferences, often involving multiple conflicting criteria such as expected path travel time, variance of path travel time and so forth. However, most of the existing studies have only considered the expected value of path travel time as the sole decision criterion. In order to solve the multi-objective model, the non-dominated sorting genetic algorithm is employed and its parameters are tuned by the Taguchi method. Moreover, a dynamic n-point crossover operator is developed to enhance the search capability of the genetic algorithm. Experimental results on a grid network demonstrate that the proposed approach is able to provide a set of non-dominated paths from which travelers can choose their paths based on their attitudes toward travel time uncertainty. Statistical analysis confirms that the dynamic n-point crossover operator outperforms the traditional one-point crossover operator.

Journal ArticleDOI
John Yinger1
TL;DR: In this paper, the authors build on the theory of household bidding and sorting across communities to derive bid-function envelopes, which provide a functional form for hedonic regressions; allowing for household heterogeneity and multiple amenities yields estimates of the price elasticity of amenity demand directly from the hedonic regression, without a Rosen two-step procedure.

Journal ArticleDOI
TL;DR: In this article, a case study from Vellinge municipality (Sweden), where the introduction of separate food waste collection is thought to have a role in reducing the total amount of household waste and improving the sorting of packaging waste, is discussed.

Journal ArticleDOI
TL;DR: This paper examined whether and why geographic sorting has occurred using a novel, dynamic analysis, and found evidence that migration can drive partisan sorting but accounts for only a small portion of the change.

Journal ArticleDOI
01 Jul 2015
TL;DR: This paper describes a new multiway-mergesort-based algorithm for sorting an array of structures that efficiently exploits the SIMD instructions and cache memory of today's processors, and shows that this approach exhibited up to 2.1x better single-thread performance than the key-index approach implemented with SIMD instructions when sorting 512M 16-byte records on one core.
Abstract: This paper describes our new algorithm for sorting an array of structures by efficiently exploiting the SIMD instructions and cache memory of today's processors. Recently, multiway mergesort implemented with SIMD instructions has been used as a high-performance in-memory sorting algorithm for sorting integer values. For sorting an array of structures with SIMD instructions, a frequently used approach is to first pack the key and index for each record into an integer value, sort the key-index pairs using SIMD instructions, then rearrange the records based on the sorted key-index pairs. This approach can efficiently exploit SIMD instructions because it sorts the key-index pairs while packed into integer values; hence, it can use existing high-performance sorting implementations of the SIMD-based multiway mergesort for integers. However, this approach has frequent cache misses in the final rearranging phase due to its random and scattered memory accesses, so this phase limits both single-thread performance and scalability with multiple cores. Our approach is also based on multiway mergesort, but it can avoid costly random accesses for rearranging the records while still efficiently exploiting the SIMD instructions. Our results showed that our approach exhibited up to 2.1x better single-thread performance than the key-index approach implemented with SIMD instructions when sorting 512M 16-byte records on one core. Our approach also yielded better performance when we used multiple cores. Compared to an optimized radix sort, our vectorized multiway mergesort achieved better performance when each record is large. Our vectorized multiway mergesort also yielded higher scalability with multiple cores than the radix sort.
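The key-index baseline described above can be sketched in a few lines of Python (our illustration; the paper's implementations use SIMD intrinsics and packed integer values): sort (key, index) pairs, then gather the full records, which is exactly the random-access rearranging phase the authors' method avoids.

```python
def key_index_sort(records, key=lambda r: r[0]):
    """Sort structures via sorted (key, index) pairs plus a final gather."""
    # Phase 1: sort compact (key, index) pairs; this part vectorizes well.
    pairs = sorted((key(r), i) for i, r in enumerate(records))
    # Phase 2: rearranging phase with scattered reads into the record array;
    # this is where the cache misses described in the abstract occur.
    return [records[i] for _, i in pairs]

records = [(3, "c"), (1, "a"), (2, "b")]
```

The larger the records and the more random the key order, the more the phase-2 gather dominates, which motivates merging whole records instead.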

Journal ArticleDOI
TL;DR: In this paper, a case study about waste sorting infrastructure performance carried out in two buildings in Gothenburg, Sweden, reveals mismatches between users' needs and what the system offers, affecting the sorting rates and quality of the sorted material.

Journal ArticleDOI
TL;DR: The proposed method hybridizes the conventional cuckoo search algorithm with arithmetic crossover operations, allowing the non-linear, non-convex objective function to be solved under practical constraints.