
Showing papers on "Sorting" published in 2013


Journal ArticleDOI
TL;DR: A survey of the state of the art on equilibrium sorting, in which the authors synthesize the state of knowledge, the new possibilities for policy analysis, and the conceptual and empirical challenges that define the equilibrium sorting process.
Abstract: Households “sort” across neighborhoods according to their wealth and their preferences for public goods, social characteristics, and commuting opportunities. The aggregation of these individual choices in markets and in other institutions influences the supply of amenities and local public goods. Pollution, congestion, and the quality of public education are examples. Over the past decade, advances in economic models of this sorting process have led to a new framework that promises to alter the ways we conceptualize the policy evaluation process in the future. These “equilibrium sorting” models use the properties of market equilibria, together with information on household behavior, to infer structural parameters that characterize preference heterogeneity. The results can be used to develop theoretically consistent predictions for the welfare implications of future policy changes. Analysis is not confined to marginal effects or a partial equilibrium setting. Nor is it limited to prices and quantities. Sorting models can integrate descriptions of how nonmarket goods are generated, estimate how they affect decision making, and, in turn, predict how they will be affected by future policies targeting prices or quantities. Conversely, sorting models can predict how equilibrium prices and quantities will be affected by policies that target product quality, information, or amenities generated by the sorting process. These capabilities are just beginning to be understood and used in applied research. This survey article aims to synthesize the state of knowledge on equilibrium sorting, the new possibilities for policy analysis, and the conceptual and empirical challenges that define the equilibrium sorting process.

201 citations


Proceedings ArticleDOI
22 Jul 2013
TL;DR: Deterministic constant-time solutions are provided for two problems in a clique of n nodes, where in each synchronous round each pair of nodes can exchange O(log n) bits.
Abstract: Consider a clique of n nodes, where in each synchronous round each pair of nodes can exchange O(log n) bits. We provide deterministic constant-time solutions for two problems in this model. The first is a routing problem where each node is source and destination of n messages of size O(log n). The second is a sorting problem where each node i is given n keys of size O(log n) and needs to receive the ith batch of n keys according to the global order of the keys. The latter result also implies deterministic constant-round solutions for related problems such as selection or determining modes.

158 citations


Journal ArticleDOI
TL;DR: Experimental results supported by nonparametric statistical tests suggest that MOBiDE is able to provide better and more consistent performance than the existing well-known multimodal algorithms for a majority of the test problems without incurring any serious computational burden.
Abstract: In contrast to the numerous research works that integrate a niching scheme with an existing single-objective evolutionary algorithm to perform multimodal optimization, a few approaches have recently been taken to recast multimodal optimization as a multiobjective optimization problem to be solved by modified multiobjective evolutionary algorithms. Following this promising avenue of research, we propose a novel biobjective formulation of the multimodal optimization problem and use differential evolution (DE) with nondominated sorting followed by hypervolume measure-based sorting to finally detect a set of solutions corresponding to multiple global and local optima of the function under test. Unlike the two earlier multiobjective approaches (biobjective multipopulation genetic algorithm and niching-based nondominated sorting genetic algorithm II), the proposed multimodal optimization with biobjective DE (MOBiDE) algorithm does not require the actual or estimated gradient of the multimodal function to form its second objective. Performance of MOBiDE is compared with eight state-of-the-art single-objective niching algorithms and two recently developed biobjective niching algorithms using a test suite of 14 basic and 15 composite multimodal problems. Experimental results supported by nonparametric statistical tests suggest that MOBiDE is able to provide better and more consistent performance than the existing well-known multimodal algorithms for a majority of the test problems without incurring any serious computational burden.
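MOBiDE's selection step couples non-dominated sorting with hypervolume measure-based sorting. As a point of reference only (a minimal sketch, not the authors' code; the function name and reference point are illustrative), the exclusive hypervolume contribution that such a sort ranks by can be computed for a two-objective minimization front as follows:

```python
# Illustrative sketch: exclusive hypervolume contribution of each point
# on a two-objective *minimization* front (not the authors' code).

def hv_contributions(front, ref):
    """front: list of mutually non-dominated (f1, f2) points.
    ref: reference point dominated by every member of the front."""
    pts = sorted(front)               # ascending f1 implies descending f2
    contribs = {}
    for i, (x, y) in enumerate(pts):
        right = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        up = pts[i - 1][1] if i > 0 else ref[1]
        contribs[(x, y)] = (right - x) * (up - y)  # rectangle only i covers
    return contribs

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(hv_contributions(front, ref=(5.0, 5.0)))
# -> {(1.0, 4.0): 1.0, (2.0, 2.0): 4.0, (4.0, 1.0): 1.0}
```

Points with the smallest exclusive contribution are discarded first when a front must be truncated, which is what makes hypervolume-based sorting favor well-spread solutions.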

119 citations


Journal ArticleDOI
TL;DR: An adapted version of the non-dominated sorting genetic algorithm (NSGA-II) is proposed to solve the problem of reconfigurable manufacturing systems (RMSs) design based on product specifications and reconfigurable machine capabilities.

112 citations


Journal ArticleDOI
01 Apr 2013
TL;DR: A Pareto-based meta-heuristic algorithm called multi-objective harmony search (MOHS) is proposed for a location model in which facilities behave as M/M/m queues within a multi-server queuing framework; results show that the proposed MOHS outperforms two competing algorithms in terms of computational time.
Abstract: In this paper, a novel multi-objective location model within a multi-server queuing framework is proposed, in which facilities behave as M/M/m queues. In the developed model of the problem, the constraints of selecting the nearest facility along with the service level restriction are considered to bring the model closer to reality. Three objective functions are also considered, including minimizing (I) the sum of the aggregate travel and waiting times, (II) the maximum idle time of all facilities, and (III) the budget required to cover the costs of establishing the selected facilities plus server staffing costs. Since the developed model of the problem is NP-hard and exact solutions are unlikely to be obtained, soft computing techniques, specifically evolutionary computations, are generally used to cope with the lack of precision. Among the various evolutionary computation techniques, this paper proposes a Pareto-based meta-heuristic algorithm called multi-objective harmony search (MOHS) to solve the problem. To validate the results obtained, two popular algorithms, the non-dominated sorting genetic algorithm (NSGA-II) and the non-dominated ranking genetic algorithm (NRGA), are utilized as well. In order to demonstrate the proposed methodology and to compare the performances in terms of Pareto-based solution measures, the Taguchi approach is first utilized to tune the parameters of the proposed algorithms, where a new response metric named the multi-objective coefficient of variation (MOCV) is introduced. Then, the results of implementing the algorithms on some test problems show that the proposed MOHS outperforms the other two algorithms in terms of computational time.
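Since each facility behaves as an M/M/m queue, the waiting times entering objective (I) follow standard queueing formulas. As a worked illustration (this is the textbook Erlang-C calculation, not the paper's code; the rates below are made up), the expected wait at a single facility can be computed as:

```python
# Illustrative sketch: expected waiting time at a facility modeled as an
# M/M/m queue, via the standard Erlang-C formula (not the paper's code).
from math import factorial

def mmm_wait(lam, mu, m):
    """lam: arrival rate, mu: per-server service rate, m: servers.
    Returns (P_wait, Wq): probability of waiting and mean wait in queue."""
    a = lam / mu          # offered load
    rho = a / m           # server utilization; must be < 1 for stability
    if rho >= 1:
        raise ValueError("unstable queue: lam >= m * mu")
    erlang_c = (a**m / factorial(m)) / (1 - rho)
    p_wait = erlang_c / (sum(a**k / factorial(k) for k in range(m)) + erlang_c)
    wq = p_wait / (m * mu - lam)  # mean time spent waiting in queue
    return p_wait, wq

# e.g. 8 arrivals/hour at a facility with 3 servers, each serving 4/hour:
print(mmm_wait(lam=8.0, mu=4.0, m=3))  # (0.444..., 0.111... hours)
```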

101 citations


Journal ArticleDOI
Mousumi Basu1
TL;DR: Nondominated sorting genetic algorithm-II is proposed to handle economic emission dispatch as a true multi-objective optimization problem with competing and noncommensurable objectives.

92 citations


Journal ArticleDOI
01 Apr 2013
TL;DR: A new approach for multiple criteria sorting problems applying general additive value functions compatible with the given assignment examples is presented, and its application is demonstrated by classifying 27 countries into 4 democracy regimes.
Abstract: We present a new approach for multiple criteria sorting problems. We consider sorting procedures applying general additive value functions compatible with the given assignment examples. For the decision alternatives, we provide four types of results: (1) necessary and possible assignments from Robust Ordinal Regression (ROR), (2) class acceptability indices from a suitably adapted Stochastic Multicriteria Acceptability Analysis (SMAA) model, (3) necessary and possible assignment-based preference relations, and (4) assignment-based pair-wise outranking indices. We show how the results provided by ROR and SMAA complement each other and combine them under a unified decision aiding framework. Application of the approach is demonstrated by classifying 27 countries into 4 democracy regimes.
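Once an additive value function compatible with the assignment examples has been found, the sorting itself reduces to scoring each alternative and comparing the score with class thresholds. A minimal sketch of that step (the marginal value functions, thresholds, and class labels below are invented for illustration and are not taken from the paper):

```python
# Minimal sketch (illustrative, not the authors' implementation): sorting
# with an additive value function and class thresholds, assumed given.

def additive_value(alternative, marginal_value_fns):
    # U(a) = sum of per-criterion marginal values u_j(a_j)
    return sum(u(x) for u, x in zip(marginal_value_fns, alternative))

def assign(alternative, marginal_value_fns, thresholds, classes):
    """thresholds: ascending lower bounds of classes[1:]."""
    score = additive_value(alternative, marginal_value_fns)
    k = sum(score >= t for t in thresholds)  # how many bounds are cleared
    return classes[k]

# Hypothetical marginal value functions on two criteria:
u_fns = [lambda x: x / 10.0, lambda x: x / 100.0]
classes = ["authoritarian", "hybrid", "flawed democracy", "full democracy"]
print(assign((7, 60), u_fns, thresholds=[0.4, 0.8, 1.2], classes=classes))
# score 1.3 clears all three thresholds -> "full democracy"
```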

82 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a parallel code for computing the dynamical evolution of collisional N-body systems with up to N ∼ 10^7 particles; the code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation and makes assumptions of spherical symmetry and dynamical equilibrium.
Abstract: We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N ∼ 10^7 particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures and the introduction of a parallel random number generation scheme as well as a parallel sorting algorithm required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce along with our choice of decomposition scheme minimize communication costs and ensure optimal distribution of data and workload among the processing units. Our implementation uses the Message Passing Interface library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functions up to core collapse with the number of stars, N, spanning three orders of magnitude from 10^5 to 10^7. We find that our results are in good agreement with self-similar core-collapse solutions, and the core-collapse times generally agree with expectations from the literature. Also, we observe good total energy conservation, within 0.04% throughout all simulations. We analyze the performance of the code, and demonstrate near-linear scaling of the runtime with the number of processors up to 64 processors for N = 10^5, 128 for N = 10^6, and 256 for N = 10^7. The runtime reaches saturation with the addition of processors beyond these limits, which is a characteristic of the parallel sorting algorithm. The resulting maximum speedups we achieve are approximately 60×, 100×, and 220×, respectively.

81 citations


Journal ArticleDOI
TL;DR: A general method to harness flow cytometry, with its unmatched speed and sensitivity, for droplet-based microfluidic sorting for high-throughput biology applications is demonstrated.
Abstract: The detection and sorting of aqueous drops is central to microfluidic workflows for high-throughput biology applications, including directed evolution, digital PCR, and antibody screening. However, high-throughput detection and sorting of drops require optical systems and microfluidic components that are complex, difficult to build, and often yield inadequate sensitivity and throughput. Here, we demonstrate a general method to harness flow cytometry, with its unmatched speed and sensitivity, for droplet-based microfluidic sorting.

76 citations


22 Jul 2013
TL;DR: A supply chain consisting of a central remanufacturing facility and a number of collection sites is studied, and the optimal parameters of the replenishment policy are obtained under each of the three possible system configurations: no sorting, central sorting, and local sorting.
Abstract: We study a supply chain consisting of a central remanufacturing facility and a number of collection sites (CS). The central facility procures returned items from the CS for remanufacturing and sale. We examine whether it is advisable to establish a sorting procedure that identifies those units that are suitable for remanufacturing before disassembly. Sorting is subject to classification errors and may be performed either centrally or locally at the CS. Assuming stochastic demand, infinite horizon and deterministic yield at the CS we obtain the optimal parameters of the replenishment policy under each one of the three possible system configurations: no sorting, central sorting and local sorting. It is then easy to determine the value of sorting and the preferred CS for the procurement of returned items.

76 citations


Journal ArticleDOI
TL;DR: This work introduces a multicore-oblivious (MO) approach to algorithms and schedulers for HM, and presents efficient MO algorithms for several fundamental problems including matrix transposition, FFT, sorting, the Gaussian Elimination Paradigm, list ranking, and connected components.

Posted Content
TL;DR: In this paper, the authors present theory and evidence that tighter credit constraints force firms to produce lower quality, and they develop a quality sorting model that predicts that firms reduce their optimal prices due to their choice of lower-quality products.
Abstract: This paper presents theory and evidence that tighter credit constraints force firms to produce lower quality. The paper develops a quality sorting model that predicts that tighter credit constraints faced by a firm reduce its optimal prices due to its choice of lower-quality products. Conversely, when quality cannot be chosen by a firm, as in an efficiency sorting model, prices increase as firms face tighter credit constraints. An empirical analysis using Chinese bank loans data and a merged sample based on Chinese firm-level data and Chinese customs data strongly supports quality sorting and confirms the mechanism of quality adjustment.

Journal ArticleDOI
Ge Nong1
TL;DR: In this experiment, SACA-K outperforms SA-IS, which was previously the most time- and space-efficient linear-time SA construction algorithm (SACA): SACA-K is around 33% faster and uses a smaller deterministic workspace of K words, where the workspace is the space needed beyond the input string and the output SA.
Abstract: This article presents an O(n)-time algorithm called SACA-K for sorting the suffixes of an input string T[0, n-1] over an alphabet A[0, K-1]. The problem of sorting the suffixes of T is also known as constructing the suffix array (SA) for T. The theoretical memory usage of SACA-K is n log K + n log n + K log n bits. Moreover, we also have a practical implementation for SACA-K that uses n bytes + (n + 256) words and is suitable for strings over any alphabet up to full ASCII, where a word is log n bits. In our experiment, SACA-K outperforms SA-IS, which was previously the most time- and space-efficient linear-time SA construction algorithm (SACA). SACA-K is around 33% faster and uses a smaller deterministic workspace of K words, where the workspace is the space needed beyond the input string and the output SA. Given K = O(1), SACA-K runs in linear time and O(1) workspace. To the best of our knowledge, such a result is the first reported in the literature with a practical source code publicly available.
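For orientation, the suffix array that SACA-K constructs can be defined by a deliberately naive sketch: sort the suffix start positions by the suffixes they index. This takes O(n^2 log n) time and is emphatically not SACA-K's linear-time induced-sorting algorithm; it only shows what the output SA contains:

```python
# Deliberately naive sketch of what a suffix array is (O(n^2 log n));
# SACA-K itself uses induced sorting to reach O(n) time and O(1)
# workspace for constant alphabets. This is NOT that algorithm.

def naive_suffix_array(t):
    """Return SA: indices of all suffixes of t in lexicographic order."""
    return sorted(range(len(t)), key=lambda i: t[i:])

t = "banana"
sa = naive_suffix_array(t)
print(sa)                       # [5, 3, 1, 0, 4, 2]
for i in sa:
    print(i, t[i:])             # suffixes listed in sorted order
```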

Proceedings ArticleDOI
06 Jul 2013
TL;DR: The paper shows that generalizing the Jensen algorithm can be achieved without affecting its time complexity, and experimental results are provided to demonstrate speedups of up to two orders of magnitude for common problem sizes, when compared with the correct baseline algorithm from Deb.
Abstract: This paper generalizes the "Improved Run-Time Complexity Algorithm for Non-Dominated Sorting" by Jensen, removing its limitation that no two solutions can share identical values for any of the problem's objectives. This constraint is especially limiting for discrete combinatorial problems, but can also lead the Jensen algorithm to produce incorrect results even for problems that appear to have a continuous nature, but for which identical objective values are nevertheless possible. Moreover, even when values are not meant to be identical, the limited precision of floating point numbers can sometimes make them equal anyway. Thus a fast and correct algorithm is needed for the general case. The paper shows that generalizing the Jensen algorithm can be achieved without affecting its time complexity, and experimental results are provided to demonstrate speedups of up to two orders of magnitude for common problem sizes, when compared with the correct baseline algorithm from Deb.
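The "correct baseline algorithm from Deb" referred to above is NSGA-II's O(MN^2) fast non-dominated sort, which stays correct under identical objective values because its dominance test requires strict improvement in at least one objective. A minimal sketch of that baseline (illustrative, not the paper's generalized Jensen implementation):

```python
# Sketch of the O(M*N^2) fast non-dominated sort in the style of Deb's
# NSGA-II baseline (illustrative; handles identical objective values,
# the case that trips the original Jensen algorithm).

def dominates(p, q):
    """p dominates q: no worse in every objective, better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def fast_nondominated_sort(pop):
    n_dom = [0] * len(pop)            # how many points dominate i
    dominated = [[] for _ in pop]     # points that i dominates
    fronts = [[]]
    for i, p in enumerate(pop):
        for j, q in enumerate(pop):
            if dominates(p, q):
                dominated[i].append(j)
            elif dominates(q, p):
                n_dom[i] += 1
        if n_dom[i] == 0:
            fronts[0].append(i)
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated[i]:
                n_dom[j] -= 1
                if n_dom[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]                # drop the trailing empty front

pop = [(1, 5), (2, 2), (5, 1), (2, 2), (3, 4)]   # note the duplicate point
print(fast_nondominated_sort(pop))   # [[0, 1, 2, 3], [4]]
```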

Journal ArticleDOI
TL;DR: A detailed analysis of these sorting units indicates that as the number of inputs increases their resource requirements scale linearly, their latencies scale logarithmically, and their frequencies remain almost constant.
Abstract: High-throughput and low-latency sorting is a key requirement in many applications that deal with large amounts of data. This paper presents efficient techniques for designing high-throughput, low-latency sorting units. Our sorting architectures utilize modular design techniques that hierarchically construct large sorting units from smaller building blocks. The sorting units are optimized for situations in which only the M largest numbers from N inputs are needed, because this situation commonly occurs in many applications for scientific computing, data mining, network processing, digital signal processing, and high-energy physics. We utilize our proposed techniques to design parameterized, pipelined, and modular sorting units. A detailed analysis of these sorting units indicates that as the number of inputs increases their resource requirements scale linearly, their latencies scale logarithmically, and their frequencies remain almost constant. When synthesized to a 65-nm TSMC technology, a pipelined 256-to-4 sorting unit with 19 stages can perform more than 2.7 billion sorts per second with a latency of about 7 ns per sort. We also propose iterative sorting techniques, in which a small sorting unit is used several times to find the largest values.
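The iterative technique mentioned last reuses a small unit to extract successive maxima. A software model of that idea (a sketch, not the authors' hardware design): each pass of compare-and-exchange operations, the primitive these units are built from, moves the next-largest input into place, so M passes select the M largest of N inputs:

```python
# Software model of the iterative idea (illustrative, not the authors'
# RTL): one compare-and-exchange pass bubbles the next-largest value
# into place, so M passes suffice when only the M largest are needed.

def compare_exchange(v, i, j):
    if v[i] > v[j]:          # keep the larger value at the higher index
        v[i], v[j] = v[j], v[i]

def top_m(values, m):
    v = list(values)
    n = len(v)
    for p in range(m):                      # one pass per output value
        for i in range(n - 1 - p):          # shrinking unsorted region
            compare_exchange(v, i, i + 1)
    return v[n - m:][::-1]                  # M largest, descending

print(top_m([7, 3, 9, 1, 8, 2, 6, 4], 4))  # [9, 8, 7, 6]
```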

Journal ArticleDOI
TL;DR: This work aims at improving a previous optimization approach proposed by some of the authors, based on the modified binary differential evolution (MBDE) algorithm, and introduces a third objective function to minimize the impact of the switching-off operations on the existing network topology.

Journal ArticleDOI
TL;DR: The modified NSGA-II is observed to perform better than the original NSGA-II, and the proposed mutation algorithm also works effectively, as evident from the experimental results.

Journal ArticleDOI
TL;DR: The proposed system enables two-dimensional cell sorting without necessitating complicated setups and operations, and thus, it can be a useful tool for general biological experiments including cell-based disease diagnosis, stem cell engineering, and cellular physiological studies.
Abstract: A simple microfluidic system has been presented to perform continuous two-parameter cell sorting based on size and surface markers. Immunomagnetic bead-conjugated cells are initially sorted based on size by utilizing the hydrodynamic filtration (HDF) scheme, introduced into individual separation lanes, and simultaneously focused onto one sidewall by the hydrodynamic effect. Cells are then subjected to magnetophoretic separation in the lateral direction, and finally they are individually recovered through multiple outlet branches. We successfully demonstrated the continuous sorting of JM (human lymphocyte cell line) cells using anti-CD4 immunomagnetic beads and confirmed that accurate size- and surface marker-based sorting was achieved. In addition, the sorting of cell mixtures was performed at purification ratios higher than 90%. The proposed system enables two-dimensional cell sorting without necessitating complicated setups and operations, and thus, it can be a useful tool for general biological experiments including cell-based disease diagnosis, stem cell engineering, and cellular physiological studies.

Journal ArticleDOI
TL;DR: A novel detection algorithm with an efficient VLSI architecture featuring efficient operation over infinite complex lattices is proposed; its support of unbounded infinite lattice decoding distinguishes the present method from previous K-Best strategies and also allows its complexity to scale sublinearly with the modulation order.
Abstract: A novel detection algorithm with an efficient VLSI architecture featuring efficient operation over infinite complex lattices is proposed. The proposed design results in the highest throughput, the lowest latency, and the lowest energy compared to the complex-domain VLSI implementations to date. The main innovations are a novel complex-domain means of expanding/visiting the intermediate nodes of the search tree on demand, rather than exhaustively, as well as a new distributed sorting scheme to keep track of the best candidates at each search phase. Its support of unbounded infinite lattice decoding distinguishes the present method from previous K-Best strategies and also allows its complexity to scale sublinearly with the modulation order. Since the expansion and sorting cores are data-driven, the architecture is well suited for a pipelined parallel VLSI implementation. The proposed algorithm is used to fabricate a 4×4, 64-QAM complex multiple-input-multiple-output detector in a 0.13-μm CMOS technology, achieving a clock rate of 417 MHz with the core area of 340 kgates. The chip test results prove that the fabricated design can sustain a throughput of 1 Gb/s with energy efficiency of 110 pJ/bit, the best numbers reported to date.

Journal ArticleDOI
Bahriye Akay1
TL;DR: Results show that S-MOABC/NS can provide good approximations to well-distributed and high-quality non-dominated fronts and can be used as a promising alternative tool to solve multi-objective problems, with the advantage of being simple and employing few control parameters.
Abstract: Pareto-based multi-objective optimization algorithms prefer non-dominated solutions over dominated solutions and maintain as much diversity as possible in the Pareto optimal set to represent the whole Pareto-front. This paper proposes three multi-objective Artificial Bee Colony (ABC) algorithms based on synchronous and asynchronous models using Pareto-dominance and non-dominated sorting: asynchronous multi-objective ABC using only the Pareto-dominance rule (A-MOABC/PD), asynchronous multi-objective ABC using a non-dominated sorting procedure (A-MOABC/NS) and synchronous multi-objective ABC using a non-dominated sorting procedure (S-MOABC/NS). These algorithms were investigated in terms of the inverted generational distance, hypervolume and spread performance metrics, running time, and approximation to the whole Pareto-front and Pareto-solution spaces. It was shown that S-MOABC/NS is more scalable and efficient compared to its asynchronous counterpart, and more efficient and robust than A-MOABC/PD. An investigation of the parameter sensitivity of S-MOABC/NS was presented to relate the behavior of the algorithm to the values of the control parameters. The results of S-MOABC/NS were compared to some state-of-the-art algorithms. Results show that S-MOABC/NS can provide good approximations to well-distributed and high-quality non-dominated fronts and can be used as a promising alternative tool to solve multi-objective problems, with the advantage of being simple and employing few control parameters.

Proceedings ArticleDOI
06 Jul 2013
TL;DR: Prioritized Grammar Enumeration (PGE) is a deterministic Symbolic Regression algorithm using dynamic programming techniques that replaces genetic operators and random number use with grammar production rules and systematic choices.
Abstract: We introduce Prioritized Grammar Enumeration (PGE), a deterministic Symbolic Regression (SR) algorithm using dynamic programming techniques. PGE maintains the tree-based representation and Pareto non-dominated sorting from Genetic Programming (GP), but replaces genetic operators and random number use with grammar production rules and systematic choices. PGE uses non-linear regression and abstract parameters to fit the coefficients of an equation, effectively separating the exploration for form, from the optimization of a form. Memoization enables PGE to evaluate each point of the search space only once, and a Pareto Priority Queue provides direction to the search. Sorting and simplification algorithms are used to transform candidate expressions into a canonical form, reducing the size of the search space. Our results show that PGE performs well on 22 benchmarks from the SR literature, returning exact formulas in many cases. As a deterministic algorithm, PGE offers reliability and reproducibility of results, a key aspect to any system used by scientists at large. We believe PGE is a capable SR implementation, following an alternative perspective we hope leads the community to new ideas.

23 Jun 2013
TL;DR: A smart bin application is proposed, based on information self-contained in tags associated with each waste item; wastes are tracked by an RFID-based system without requiring the support of an external information system.
Abstract: Radio Frequency Identification (RFID) is a pervasive computing technology that can be used to improve waste management by providing early automatic identification of waste at bin level. In this paper, we propose a smart bin application based on information self-contained in tags associated with each waste item. The wastes are tracked by smart bins using an RFID-based system without requiring the support of an external information system. Two crucial features of the selective sorting process can be improved using this approach. First, the user is helped in the application of selective sorting. Second, the smart bin knows its content and can report back to the rest of the recycling chain.

01 Jan 2013
TL;DR: This study considers a sorting method in which categories are defined by profiles separating consecutive categories, which corresponds to a simplified version of ELECTRE Tri, and proposes a learning procedure that relies on a set of known assignment examples to find parameters compatible with these assignments.

Journal ArticleDOI
01 May 2013
TL;DR: This paper attempts to solve the automatic test task scheduling problem (TTSP) with the objectives of minimizing the maximal test completion time (makespan) and the mean workload of the instruments.
Abstract: Solving a task scheduling problem is a key challenge for automatic test technology to improve throughput, reduce test time, and operate the necessary instruments at their maximum capacity. Therefore, this paper attempts to solve the automatic test task scheduling problem (TTSP) with the objectives of minimizing the maximal test completion time (makespan) and the mean workload of the instruments. In this paper, the formal formulation and the constraints of the TTSP are established to describe this problem. Then, a new encoding method called the integrated encoding scheme (IES) is proposed. This encoding scheme is able to transform a combinatorial optimization problem into a continuous optimization problem, thus improving the encoding efficiency and reducing the complexity of the genetic manipulations. More importantly, because the TTSP has many local optima, a chaotic non-dominated sorting genetic algorithm (CNSGA) is presented to avoid becoming trapped in local optima and to obtain high quality solutions. This approach introduces a chaotic initial population, a crossover operator, and a mutation operator into the non-dominated sorting genetic algorithm II (NSGA-II) to enhance the local searching ability. Both the logistic map and the cat map are used to design the chaotic operators, and their performances are compared. To identify a good approach for hybridizing NSGA-II and chaos, and indicate the effectiveness of IES, several experiments are performed based on the following: (1) a small-scale TTSP and a large-scale TTSP in real-world applications and (2) a TTSP used in other research. Computational simulations and comparisons show that CNSGA improves the local searching ability and is suitable for solving the TTSP.
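CNSGA's chaotic initial population (and its chaotic crossover and mutation operators) are driven by the logistic map or the cat map. A minimal sketch of the initialization idea using the logistic map (the seed and parameter values are illustrative assumptions, not the paper's settings):

```python
# Minimal sketch of a logistic-map chaotic initial population as used in
# chaos-enhanced NSGA-II variants (illustrative; parameters assumed).

def logistic_population(pop_size, dim, lo, hi, x0=0.7, r=4.0):
    """Generate pop_size individuals of dim variables in [lo, hi] by
    iterating the logistic map x <- r*x*(1-x), chaotic at r = 4.
    Avoid seeds in {0, 0.25, 0.5, 0.75, 1.0}: they hit fixed points."""
    x = x0
    population = []
    for _ in range(pop_size):
        individual = []
        for _ in range(dim):
            x = r * x * (1.0 - x)          # chaotic iterate in (0, 1)
            individual.append(lo + (hi - lo) * x)
        population.append(individual)
    return population

for ind in logistic_population(pop_size=4, dim=3, lo=0.0, hi=10.0):
    print([round(v, 3) for v in ind])
```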

Book ChapterDOI
12 Nov 2013
TL;DR: A new metaheuristic designed to learn the parameters of an MR-Sort model; it works in two phases that are iterated, and the results of numerical tests provide insights into the algorithm's behavior.
Abstract: Learning the parameters of a Majority Rule Sorting (MR-Sort) model through linear programming requires the use of binary variables. In the context of preference learning, where large sets of alternatives and numerous attributes are involved, such an approach is not an option in view of the large computing times implied. Therefore, we propose a new metaheuristic designed to learn the parameters of an MR-Sort model. This algorithm works in two phases that are iterated. The first one consists in solving a linear program determining the weights and the majority threshold, assuming a given set of profiles. The second phase runs a metaheuristic which determines profiles for a fixed set of weights and a majority threshold. The presentation focuses on the metaheuristic and reports the results of numerical tests, providing insights into the algorithm's behavior.
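For context, the learned parameters (criteria weights, a majority threshold, and category profiles) plug into the MR-Sort assignment rule: an alternative is placed in the highest category whose lower profile it equals or beats on a coalition of criteria whose total weight reaches the majority threshold. A minimal sketch with invented data (this is the assignment rule, not the paper's learning metaheuristic):

```python
# Minimal sketch of the MR-Sort assignment rule that the learned weights,
# majority threshold and profiles feed into (illustrative data only).

def mr_sort(alternative, profiles, weights, lmbda):
    """profiles[h] is the lower-limit profile of category h+1 (ascending).
    The alternative enters the highest category whose lower profile it
    meets on a coalition of criteria weighing at least lmbda."""
    category = 0                      # worst category by default
    for h, profile in enumerate(profiles):
        support = sum(w for a, b, w in zip(alternative, profile, weights)
                      if a >= b)      # weight of criteria at/above profile
        if support >= lmbda:
            category = h + 1
    return category

weights = [0.3, 0.3, 0.4]                # normalized criteria weights
profiles = [(10, 10, 10), (15, 15, 15)]  # lower limits of categories 1, 2
print(mr_sort((12, 16, 9), profiles, weights, lmbda=0.6))   # -> 1
print(mr_sort((16, 18, 15), profiles, weights, lmbda=0.6))  # -> 2
```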

Journal ArticleDOI
TL;DR: In this article, a glyph-based conceptual framework is presented for interactive sorting of multivariate data; sorting is one of the most common analytical tasks performed on individual attributes of a multi-dimensional data set.
Abstract: Glyph-based visualization is an effective tool for depicting multivariate information. Since sorting is one of the most common analytical tasks performed on individual attributes of a multi-dimensional data set, this motivates the hypothesis that introducing glyph sorting would significantly enhance the usability of glyph-based visualization. In this paper, we present a glyph-based conceptual framework as part of a visualization process for interactive sorting of multivariate data. We examine several technical aspects of glyph sorting and provide design principles for developing effective, visually sortable glyphs. Glyphs that are visually sortable provide two key benefits: 1) performing comparative analysis of multiple attributes between glyphs and 2) supporting multi-dimensional visual search. We describe a system that incorporates focus and context glyphs to control sorting in a visually intuitive manner and to view sorted results in an Interactive, Multi-dimensional Glyph (IMG) plot that enables users to perform high-dimensional sorting and to analyse and examine data trends in detail. To demonstrate the usability of glyph sorting, we present a case study in rugby event analysis for comparing and analysing trends within matches. This work is undertaken in conjunction with a national rugby team. Using glyph sorting, analysts have reported the discovery of new insight beyond traditional match analysis.

Journal ArticleDOI
TL;DR: Results show that the defect detection, shape, and size algorithms and the overall system had accuracies of 84.4%, 90.9%, 94.5%, and 90%, respectively.
Abstract: Online sorting of tomatoes according to their features is an important postharvest procedure. The purpose of this research was to develop an efficient machine vision-based experimental sorting system for tomatoes. Relevant sorting parameters included shape (oblong and circular), size (small and large), maturity (color), and defects. The variables defining shape, maturity, and size of the tomatoes were eccentricity, average of color components, and 2-D pixel area, respectively. Tomato defects include color disorders, growth cracks, sunscald, and early blight. The sorting system involved the use of a CCD camera, a microcontroller, sensors, and a computer. Images were analyzed with an algorithm that was developed using Visual Basic 2008. In order to evaluate the accuracy of the algorithms and system performance, 210 tomato samples were used. Each detection algorithm was applied to all images. Data about the type of each sample image, including healthy or defective, elongated or rounded, small or large, and color, were extracted. Results show that the defect detection, shape, and size algorithms and the overall system had accuracies of 84.4%, 90.9%, 94.5%, and 90%, respectively. System sorting performance was estimated at 2517 tomatoes h^-1 with a single sorting line.
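The three quantitative features named above (eccentricity for shape, average color components for maturity, 2-D pixel area for size) can be computed from a segmented image with standard image moments. A NumPy sketch under that assumption (the authors' implementation was in Visual Basic 2008; this is an independent illustration):

```python
# Sketch of the three sorting features named in the paper: eccentricity
# (shape), mean color (maturity) and pixel area (size), via standard
# image moments (illustrative, not the authors' Visual Basic code).
import numpy as np

def tomato_features(rgb, mask):
    """rgb: HxWx3 array; mask: HxW boolean array of tomato pixels."""
    ys, xs = np.nonzero(mask)
    area = xs.size                              # size = 2-D pixel area
    mean_color = rgb[mask].mean(axis=0)         # maturity = avg R, G, B
    # Central second moments of the region give its eccentricity.
    xc, yc = xs.mean(), ys.mean()
    mu20 = ((xs - xc) ** 2).mean()
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    common = np.sqrt(((mu20 - mu02) / 2) ** 2 + mu11 ** 2)
    lam_max = (mu20 + mu02) / 2 + common        # major-axis eigenvalue
    lam_min = (mu20 + mu02) / 2 - common        # minor-axis eigenvalue
    eccentricity = np.sqrt(1 - lam_min / max(lam_max, 1e-12))
    return eccentricity, area, mean_color

# Synthetic oblong blob for a quick check (ecc should be about 0.866):
h, w = 100, 100
yy, xx = np.mgrid[:h, :w]
mask = ((xx - 50) / 30) ** 2 + ((yy - 50) / 15) ** 2 <= 1
rgb = np.zeros((h, w, 3))
rgb[mask] = (200, 40, 30)                       # reddish region
print(tomato_features(rgb, mask))
```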

Journal ArticleDOI
TL;DR: This Letter uses computational evolution to predict design features of networks processing ligand categorization, and draws a deep analogy between immune recognition and biochemical adaptation.
Abstract: Many biological networks have to filter out useful information from a vast excess of spurious interactions. In this Letter, we use computational evolution to predict design features of networks processing ligand categorization. The important problem of early immune response is considered as a case study. Rounds of evolution with different constraints uncover elaborations of the same network motif we name "adaptive sorting." Corresponding network substructures can be identified in current models of immune recognition. Our work draws a deep analogy between immune recognition and biochemical adaptation.

Proceedings ArticleDOI
15 Nov 2013
TL;DR: All three algorithms reported good solutions, but GA and NSGA are subject to premature convergence and duplicate solutions, while NSGA-II gives a good and diversified range of solutions.
Abstract: In this paper, the virtual machine placement problem is formulated as a multi-objective optimization problem. The objectives are maximizing profit, maximizing load balancing, and minimizing resource wastage. Results of the Genetic Algorithm (GA), the Non-dominated Sorting Genetic Algorithm (NSGA), and the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) are compared with a common solution representation and penalty and benefit values. All three algorithms reported good solutions, but GA and NSGA are subject to premature convergence and duplicate solutions, while NSGA-II gives a good and diversified range of solutions.

Journal ArticleDOI
TL;DR: In this article, a sorting decision between two variable compensation systems, where both options carry wage risks, is studied, and the authors find evidence for both risk diversification considerations and free-riding concerns as drivers of self-selection.